1. Introduction
Until recently, near-surface moisture measurements were normally only possible with in situ instruments like those available from the national Automated Surface Observing System (ASOS) network. The capability to remotely retrieve surface moisture using radar echoes from ground targets (Fabry et al. 1997; Fabry 2004) opens a new paradigm for surface moisture observations. Moisture fields attained from radar refractivity retrievals have higher spatial resolution (typically 4 km) and higher temporal resolution (4–10 min, depending on the volume coverage pattern) than those routinely attained from the ASOS network (Koch and Saleeby 2001).
Briefly, refractivity is related to meteorological parameters and takes the following form (Bean and Dutton 1968): N = 77.6(p/T) + 3.73 × 10⁵(e/T²), where p represents the air pressure in hectopascals, T represents the absolute air temperature in kelvins, and e represents the vapor pressure in hectopascals. The first term in this equation is referred to as the air density term, while the second term is referred to as the moisture term. Near the surface of the earth, with relatively warm temperatures, most of the spatial variability in N results from changes in the moisture term. Hence, gradients in the refractivity fields may be used to diagnose gradients in the near-surface moisture and/or temperature. To diagnose temporal changes in refractivity, scan-to-scan refractivity differences can be computed. These high-resolution observations of near-surface moisture are considered important to the pursuit of our further understanding and better prediction of convective processes (e.g., convective precipitation and its intensification) because the existing surface instruments simply do not provide sufficient spatial and temporal resolutions (e.g., Emanuel et al. 1995; Dabberdt and Schlatter 1996; National Research Council 1998).
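The relative weight of the two terms can be checked directly from the formula above. The short sketch below (with hypothetical variable names) evaluates N and illustrates why, at warm near-surface temperatures, variability in N is dominated by the moisture term:

```python
# Refractivity relation of Bean and Dutton (1968):
#   N = 77.6 (p / T) + 3.73e5 (e / T^2)
# p: air pressure (hPa), T: absolute temperature (K), e: vapor pressure (hPa)

def refractivity(p_hpa, t_k, e_hpa):
    """Return refractivity N (in N units)."""
    density_term = 77.6 * p_hpa / t_k          # air density term
    moisture_term = 3.73e5 * e_hpa / t_k ** 2  # moisture term
    return density_term + moisture_term

# Sensitivity check at T = 300 K: a 1-hPa change in vapor pressure changes
# N by 3.73e5 / 300**2 ~ 4.1 N units, whereas a 1-hPa change in pressure
# changes N by only 77.6 / 300 ~ 0.26 N units.
```

For typical warm-season surface conditions (p ≈ 970 hPa, T ≈ 300 K, e ≈ 20 hPa) this gives N ≈ 334, with the moisture term contributing roughly a quarter of the total but nearly all of the spatial variability.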
During the International H2O Project (IHOP 2002), radar refractivity retrievals from the National Center for Atmospheric Research’s S-band dual-polarization Doppler radar (S-Pol) were one of many moisture measurements taken to improve our understanding of the role of near-surface moisture in convective processes (Weckwerth et al. 2004). The application of Fabry’s radar refractivity technique to the S-Pol time series data provided the opportunity to investigate the moisture information contained in the refractivity fields (Weckwerth et al. 2004, 2005; Demoz et al. 2006; Fabry 2006; Buban et al. 2007). Although these studies illustrate the capability of using refractivity fields to identify strong moisture gradients associated with a variety of weather phenomena such as fronts, outflows, and drylines, a pertinent question largely unexplored to date is the utility of refractivity fields to forecasters at National Weather Service (NWS) Weather Forecast Offices (WFOs).
The integration of users, in this case forecasters, into the research and development process has gained importance among meteorologists, as evidenced by both the development of testbeds supported by the National Oceanic and Atmospheric Administration (NOAA 2008) and the growth of the Weather and Society Integrated Studies (WAS*IS) movement, a community of professionals working toward the integration of meteorology and social sciences into research (Demuth et al. 2007). As discussed by Morss et al. (2005), incorporating user needs at the beginning and throughout the research and development processes is pivotal to producing the most usable scientific knowledge or information.
Testbeds and other formalized experiments help the meteorological community more quickly assess the usefulness of new tools (instruments, models, research discoveries) and bridge traditional gaps between research and operations. One example is the annual NOAA Hazardous Weather Testbed Experimental Forecast Program (HWT EFP), conducted by the National Severe Storms Laboratory (NSSL) and the Storm Prediction Center (SPC), which grew from the initial Spring Program in 2000 (Kain et al. 2003). Because this is an evolving program, the scientific goals change annually. What remains unchanged is the overarching goal of the program to be “mutually beneficial to the participating operational and research organizations” (Kain et al. 2003). A commendable outcome of the program has been the development of new research projects that were not initially anticipated. The ongoing, interactive nature of the NOAA HWT EFP exemplifies the end-to-end-to-end research model of Morss et al. (2005).
Several research-to-operations programs have demonstrated significant benefits from interactions with stakeholders. The Joint Doppler Operational Project (JDOP), conducted from 1977 to 1979 by the National Severe Storms Laboratory, NWS, U.S. Air Force, and Federal Aviation Administration, demonstrated significant improvements to the lead times for tornadoes and severe weather warning statistics from the application of Doppler weather radar to operations. These improvements to warning operations instigated the nationwide implementation of the Next-Generation Doppler Radar (NEXRAD) network (Whiton et al. 1998).
Similarly, advances in dual-polarization radar research led to the 2002/03 Joint Polarization Experiment (JPOLE; Scharfenberg et al. 2005) conducted at the Norman, Oklahoma, WFO. The operational evaluation of 1) base polarimetric radar variables measured by the polarimetric Weather Surveillance Radar-1988 Doppler (WSR-88D) at Norman (KOUN), 2) hydrometeor classification products, and 3) quantitative precipitation estimates showed the potential for polarimetric radar to significantly benefit the decision making and forecasts of operational meteorologists. The JPOLE also contributed to the decision to upgrade the WSR-88D network with dual-polarization technology, which is anticipated during 2010–12.
Given the successful attainment of refractivity retrievals during IHOP 2002 (Weckwerth et al. 2004), and the demonstrated potential to use refractivity fields to identify strong moisture gradients associated with a variety of weather phenomena (Weckwerth et al. 2004, 2005; Demoz et al. 2006; Fabry 2006; Buban et al. 2007), a relevant question is: How useful are diagnostic features of retrieved refractivity to an operational forecaster? An initial exploration of the answer to this question was conducted by the Colorado Refractivity Experiment for H2O Research and Collaborative Operational Technology Transfer (REFRACTT).
The primary objective of the spring 2007 and 2008 Refractivity Experiments at the Oklahoma City–Twin Lakes, Oklahoma, NEXRAD weather radar (KTLX) was to gain an understanding of both the benefits and limitations of the use of refractivity retrievals, as a supplemental dataset, in an operational environment. In this study, forecasters at the Norman WFO participated in the experiments, which ran from 18 April to 22 June 2007 and from 15 April to 20 June 2008. During each experiment, two refractivity fields—refractivity and scan-to-scan refractivity change—were acquired from time series data measured at KTLX (Fig. 1). Forecasters were invited to evaluate the operational benefits and limitations of these products by responding to a questionnaire. Four operationally relevant measures of refractivity fields evaluated were 1) depiction of near-surface moisture fields, 2) forecast benefits, 3) impacts on the forecast, and 4) importance of implementing the technique at all WFOs.
One advantage of working with the Norman WFO was forecaster access to a network of mesoscale surface observations called the Oklahoma Mesonet (Fig. 1; Brock et al. 1995; McPherson et al. 2007). The Oklahoma Mesonet was considered advantageous because it provided forecasters with a comparison dataset within the refractivity domain (∼40 km) that they could use to assess the compatibility and relative advantage of refractivity retrievals to operations. It is important to note, however, that the Oklahoma Mesonet, which serves WFOs in Norman and Tulsa, Oklahoma, and Amarillo, Texas, is nearly unique in its density of stations (122 total, ≥1 per county) and data quality standards (McPherson et al. 2007). Hence, not all WSR-88D sites have as many observational surface stations within a 40-km range as KTLX (e.g., seven observations). Because most WSR-88Ds are located near high-density population centers, however, having two or more surface stations located within the ground clutter field is common (more information online at http://www.eol.ucar.edu/projects/hydrometnet). Further, the number of mesonet networks is growing and is a key component of the observational architecture laid out by the National Academies (National Research Council 2009). A few examples of other mesonet networks available to WFOs are the West Texas Mesonet and the New Jersey Weather and Climate Network.
This paper is organized as follows. The design of the 2007 and 2008 experiments is described in section 2. Findings from forecaster responses to the questionnaires and real-time examples are discussed in section 3, and a discussion and remaining questions are presented in section 4.
2. Experiment design
This experiment employed the University of Oklahoma refractivity retrieval technique (Cheong et al. 2008), which is based on the work of Fabry et al. (1997) and Fabry (2004). As first shown by Fabry et al. (1997), because changes in the refractive index (caused by variations in atmospheric conditions) impact the propagation speed of electromagnetic waves, radar-measured changes in the path-integrated phase of electromagnetic waves, backscattered by ground clutter targets, can be used to obtain radar-based refractivity retrievals. In this study, implementation differences from Fabry (2004) lie in the processing approaches for smoothing, interpolation, and derivative calculation (Cheong et al. 2008). Additionally, the retrieval technique of Cheong et al. (2008) is designed within a modular architecture to provide flexible portability, which allows for the application of the processing software to most radars with minimal changes. Readers interested in the details of the technique employed in the experiment are referred to Cheong et al. (2008).
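The core of the retrieval can be stated compactly: for a stationary ground target, the two-way phase accumulated along the path satisfies dφ/dr = (4πf/c)n, so the scan-to-scan change in the range derivative of target phase yields the local change in refractivity, since N = (n − 1) × 10⁶. The sketch below illustrates this calculation only; the function name is hypothetical, and the smoothing, interpolation, and quality-control steps that distinguish Cheong et al. (2008) from Fabry (2004) are omitted.

```python
import numpy as np

C = 2.998e8  # speed of light (m s^-1)

def refractivity_change(phase_now, phase_ref, ranges_m, freq_hz):
    """Estimate the refractivity change (Delta-N, in N units) along a ray
    from the change in two-way phase of fixed ground clutter targets
    between the current scan and a reference scan (after Fabry et al. 1997).

    phase_now, phase_ref : phase (rad) at each ground target
    ranges_m             : target ranges (m), monotonically increasing
    freq_hz              : radar transmit frequency (Hz)
    """
    dphi = np.unwrap(phase_now - phase_ref)  # remove 2*pi ambiguities
    dphi_dr = np.gradient(dphi, ranges_m)    # range derivative of phase change
    # Two-way propagation: dphi/dr = (4 pi f / c) * dn, with N = (n - 1) * 1e6
    return dphi_dr * C / (4.0 * np.pi * freq_hz) * 1.0e6
```

As a sense of scale, a uniform 20-N-unit increase at S band (2.8 GHz) corresponds to a phase gradient of about 2.3 × 10⁻³ rad m⁻¹, i.e., roughly 94 rad of accumulated phase at 40-km range, which is why careful phase unwrapping and smoothing are central to any practical implementation.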
Figure 2 illustrates the data flow from the raw in-phase and quadrature (I/Q) data to the fully processed radar fields that were presented to the weather forecasters for field evaluation. We used a Local Data Manager (LDM), developed by Unidata, as the communication software between the workstation at the radar and the refractivity processing unit, which uses the standard moment and phase data to compute the refractivity fields. This unit is connected to the Warning Decision Support System-Integrated Information (WDSS-II) data server at the National Weather Center (NWC) in Norman, which stores and serves the radar fields. Finally, the radar fields are presented to the weather forecasters via the WDSS-II software (Lakshmanan et al. 2007). Figure 3 shows a WDSS-II display of the two products evaluated by forecasters: refractivity and scan-to-scan refractivity change. The scan-to-scan refractivity change is the temporal difference in refractivity computed between consecutive volume scans. This setup, with Oklahoma Mesonet data overlaid, is representative of that used by forecasters during the experiments. In this screenshot, although the refractivity field appears fairly homogeneous, the scan-to-scan refractivity field reveals a band of positive refractivity change over the north-central part of the domain, indicative of an increase in near-surface moisture. Because the refractivity field is derived from ground clutter, the map’s domain extends out to about 40 km in range from KTLX.
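The scan-to-scan change product itself is simply a temporal difference of successive retrieved N fields; a minimal sketch (hypothetical function name, assuming missing retrievals are flagged as NaN):

```python
import numpy as np

def scan_to_scan_change(n_now, n_prev):
    """Scan-to-scan refractivity change (N units): the temporal difference
    of the retrieved N field between consecutive volume scans.

    Positive values indicate increasing N (moistening and/or cooling);
    negative values indicate drying and/or warming. NaNs marking missing
    retrievals propagate through the subtraction and stay missing.
    """
    return n_now - n_prev
```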
To obtain forecaster evaluations of the benefits and limitations of the refractivity and scan-to-scan refractivity fields described above, the first author designed a sampling strategy, training materials, and questionnaire appropriate to this task. A description of these elements follows.
a. Sampling
A purposive sampling strategy, called critical case sampling (Patton 1990), was used to select the Norman WFO as the site for this evaluation of refractivity fields. In critical case sampling, a site is strategically chosen as one that would “yield the most information and have the greatest impact on the development of knowledge” (Patton 1990). Here, knowledge means the current potential for refractivity fields to improve NWS forecast services. The Norman WFO was chosen due to its rich history of providing meaningful evaluations on the operational use of new technologies, including the evaluation of WSR-88D data in the 1970s (JDOP; Whiton et al. 1998) and 1980s (DOPLIGHT ’87, Doswell and Flueck 1989), and more recently, the evaluation of dual-polarimetric data (Scharfenberg et al. 2005). Though the Norman WFO is an appropriate site for this study, it is certainly possible that the findings from another strategically chosen site could differ from the results reported upon here. The findings of this study are most representative of WFOs with qualitatively similar weather situations.
Because participation in the evaluation was outside of the forecasters’ official job responsibilities, involvement was voluntary. As a result, the sample size is limited by many factors, such as willingness to participate, time constraints, and official job responsibilities. In this study, the sampling was terminated when participants had exhausted what they had to say. As will be discussed in section 4, this state was reached by the end of the 2008 experiment, following the completion of 41 questionnaires by seven participants.
b. Forecaster training
Education materials were designed with the forecaster audience and the experiment’s goals in mind. For the initial training in 2007, an interactive lecture instructional strategy was chosen to maintain interest and motivation, and to assess whether the forecasters were meeting learning outcomes. To prepare forecasters to evaluate the utility of refractivity fields (refractivity and scan-to-scan refractivity change), education materials were developed to help participants achieve the following four learning outcomes:
define and describe the meteorological uses of refractivity measurements,
explain why and how refractivity can be derived from ground clutter targets,
demonstrate the ability to interpret refractivity fields, and
envision how refractivity fields may be used in operations.
Another important component of the education materials was a demonstration of the WDSS-II display that forecasters would use to interpret refractivity fields. The demonstration included those aspects of the WDSS-II that were most important to the successful interpretation of refractivity fields, such as auto-update and looping, data readout, and the overlay of Oklahoma Mesonet observations. As mentioned previously, the ability to overlay Oklahoma Mesonet observations, in particular dewpoint temperature, provided forecasters with a comparison dataset familiar to them. Overall, 13 meteorologists, including forecasters, interns, and managers, attended the training sessions. During each training session, meteorologists were also officially invited to participate in the experiment and given a copy of an “informed consent” form, to advise them of their rights and to assure them that their responses would remain anonymous.
Prior to the 2008 experiment, an interactive Web-based module was produced to help forecasters review refractivity concepts and practice interpreting refractivity and scan-to-scan refractivity change fields collected during the 2007 experiment. The Web-based module format allowed forecasters to take the refresher training when it best fit their schedules.
c. Questionnaire
The questionnaire was designed with two primary factors in mind: the audience (i.e., forecasters) and the purpose of the experiment. Owing to the duration of the experiments—approximately 2 months each—and the fatigue of forecasters at the end of each shift, the questionnaire length was limited to nine questions. The time required to complete the questionnaire was also minimized by producing a mix of multiple choice, Likert-scale, and open-ended questions that would measure the benefits and limitations of refractivity fields to operational forecasting. Multiple-choice questions allowed forecasters to quickly share contextual information such as forecast concerns and whether they were issuing short-term or long-term forecasts [questions 1 and 2 (Q1 and Q2); see the appendix]. A Likert scale is a method of applying quantitative values to qualitative data. In this study, this type of question asks forecasters to rate their qualitative assessments of radar refractivity fields on a five-point scale. To attain a reasonably complete depiction of forecast utility, a set of Likert-scale and open-ended questions was designed to evaluate four operationally relevant measures of refractivity fields: 1) depiction of near-surface moisture fields (Q3), 2) forecast benefits (Q5), 3) impacts on the forecast (Q6), and 4) the importance of implementing the technique operationally (Q8). Responses to open-ended questions provided the information required to explain the meaning of numeric ratings. Following this initial questionnaire design in 2007, a subset of forecasters was asked to assess the validity of each of the items, that is, whether the item measures what the designer thinks it measures (Ary et al. 2002). Forecaster comments were incorporated into the wording and format of the questionnaire.
During the KTLX Spring 2007 Refractivity Experiment, 24 questionnaires were completed by seven participants. A preliminary analysis indicated that, during all events, forecasters rated the impact of refractivity retrievals on their forecasts as relatively low (1–2 versus 5–6, with a rating of 6 being the highest), compared to other observations of low-altitude moisture (e.g., Oklahoma Mesonet). They also assigned low ratings to the importance of making the refractivity retrieval technique operational. Written responses indicated that, within the ∼40-km domain, the relative lack of unique information provided by the refractivity data, compared to the operational Oklahoma Mesonet or radar reflectivity data, was a primary reason for the low ratings of the impact of refractivity retrievals on forecasts and the importance of operational implementation.
Based on this preliminary analysis, we decided that a second refractivity experiment was needed to attain additional information about how having Oklahoma Mesonet data impacts forecaster perceptions of the usefulness of refractivity fields, and to assess the repeatability of the findings. As mentioned in the introduction, because not all WFOs have access to the same density of surface observations, attaining this information would help to determine the applicability of findings from this study to other sites. To attain this information, a question asking how having Oklahoma Mesonet data impacts the usefulness of refractivity fields was added to the original questionnaire (Q7).
The preliminary analysis also provided an understanding of how the participants interacted with the questionnaire and what small changes were desirable to assure that the needed information was attained, while maintaining the validity of the questionnaire. For example, although Q3 asked participants only for ratings of their confidence in the depiction of the moisture field by refractivity retrievals, and explanations thereof, most written explanations also included a description of the forecaster’s interpretation of these fields. These additional explanations not only exemplified the thoughtful feedback the Norman WFO is accustomed to providing, but also illustrated that the training they received (section 2b) gave them the background needed to correctly interpret the refractivity fields. To continue attaining the insightful information about forecasters’ understanding of refractivity fields, a question asking participants to describe their interpretation of the refractivity fields was added (Q4).
Additionally, in their written responses to Q3, several forecasters requested the capability to choose the refractivity range, which spanned 100 N units in the 2007 experiment, to more clearly depict moisture gradients when they span a relatively small range of refractivity values (e.g., 5 N units). In response to this request, a set of five refractivity products was provided to users in 2008: the original, 100-N-unit refractivity field, and refractivity fields with the following 60-N-unit ranges: 240–300, 270–330, 300–360, and 330–390. Also, Q3 was expanded to provide the opportunity to rate the depiction of moisture fields by this set of refractivity products. It is possible that other representations of the refractivity data may also improve forecaster comprehension of refractivity fields.
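The range-selection option amounts to re-rendering the same retrieved field with a narrower color-scale window, so that the full colormap is spent on a 60-N-unit span. A sketch of this idea follows; the clipping approach and function names are illustrative assumptions, not the WDSS-II implementation.

```python
import numpy as np

# The four 60-N-unit display windows offered in 2008 (N units).
WINDOWS_N_UNITS = [(240, 300), (270, 330), (300, 360), (330, 390)]

def windowed_display(n_field, lo, hi):
    """Clip the retrieved N field to [lo, hi] for display, so the colormap
    resolves gradients of only a few N units within the chosen window."""
    return np.clip(n_field, lo, hi)
```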
After adding Q4 and Q7 to the questionnaire, to maintain its original length (nine items) and its validity, two questions asking for contextual information, which had limited usefulness, were removed. All other questions were retained to provide a basis for comparison between the forecaster responses in 2007 and 2008. This refined questionnaire was used in the 2008 spring experiment (see the appendix). Since the questionnaire studied the thought processes and knowledge of participants, we attained approval for the experiment from the University of Oklahoma’s Office of Human Participant Protection Institutional Review Board, prior to the starting date of the 2007 and 2008 experiments. The findings from the analysis of forecaster responses to the questionnaire are presented in the next section.
3. Operational assessment of refractivity fields
As mentioned previously, participants completed 24 responses to the questionnaire during the Spring 2007 KTLX Refractivity Experiment; 17 responses were completed in 2008. The 41 total responses to the questionnaire were attained from the seven forecasters who participated in the 2007 and 2008 experiments. These seven forecasters each had 5 yr or more of experience in the National Weather Service. Questionnaire responses evaluated the operational use of refractivity fields on 31 different days. The number of responses is higher than the number of days (41 versus 31, respectively) because at times more than one forecaster completed a questionnaire on the same day. On these 31 days, forecasters evaluated the use of the refractivity fields for the following weather situations:
boundary layer evolution in clear-air conditions (M = 10, where M = number of cases);
passage of outflow boundaries (M = 8), cold fronts (M = 6), and drylines (M = 4);
moisture advection (M = 2); and
warming and drying ahead of a prefrontal trough (M = 1).
Figure 4 illustrates how refractivity retrievals depicted the near-surface environment during three events, including an outflow boundary, a cold front, and moisture advection. On 4 July 2007, the scan-to-scan refractivity change (computed between two volume scans) portrayed a westward-moving outflow boundary as a north–south-oriented band of positive scan-to-scan change in refractivity, which indicates cooling and moistening of the environment (Fig. 4a). On 15 May 2007, a northwest–southeast-oriented gradient in the refractivity field implies the presence of a moisture gradient behind a southeastward-progressing cold front, indicated also by the 1°–2°C difference in surface dewpoint temperature between Oklahoma Mesonet stations across this region (Fig. 4b). On 7 April 2008, a domain-wide, 10–15-N-unit increase in refractivity values over 3 h indicates a dramatic increase in near-surface dewpoint temperature due to moisture advection (Fig. 4c).
The capability of refractivity retrievals to depict these and other types of weather situations is well documented (Weckwerth et al. 2005; Demoz et al. 2006; Fabry 2006; Roberts et al. 2008), while the assessment by forecasters of potential service improvements owing to refractivity fields is lacking. The next section describes the analysis methods employed to interpret forecaster responses from the 2007 and 2008 experiments and key findings.
a. Analysis methods
Because the questionnaire included a mix of multiple choice, Likert-scale, and open-ended questions (see the appendix), both statistical (Wilks 2006) and thematic qualitative analysis methods (Boyatzis 1998) were employed. The statistical analysis provides information about the distribution of the ratings and the quantitative relationships among them, whereas the qualitative analysis explains the meaning of the statistical findings (Miles and Huberman 1994); this explanation was attained by coding and categorizing written responses from forecasters (Boyatzis 1998; Creswell 2002). For example, to understand how forecasters assigned confidence ratings to refractivity fields (Q3; see the appendix), forecaster explanations were analyzed individually to determine their criteria, and then grouped based on similarity. Additionally, descriptions of the refractivity fields provided in response to questions 3, 4, and 9 (appendix) were analyzed by noting key characteristics and then grouping those that were similar. The characterization of refractivity fields supported meaningful explanations of forecaster ratings. To ensure that written responses were interpreted correctly, forecasters were asked to participate in postanalysis interviews; five of the seven forecasters participated. This joint statistical and qualitative analysis provides a richer, more substantial evaluation than a statistical assessment alone because it includes the reasoning behind the forecaster ratings.
b. Refractivity retrievals: User confidence and impact on the forecast
To evaluate the fidelity of the refractivity fields, forecasters were asked to rate their confidence in the refractivity retrievals’ depiction of the near-surface moisture field and explain their ratings (Q3; see the appendix). In the written explanations that accompanied the ratings, forecasters indicated that they based their confidence ratings on two criteria: 1) how closely variations within the refractivity fields related to variations in Oklahoma Mesonet, KTLX reflectivity factor (hereafter, reflectivity), or visible satellite data and 2) the comparative utility of the two refractivity fields.
A joint analysis of the first criterion and forecaster ratings suggests that confidence ratings of 3 or higher signified similar features among datasets, whereas ratings of 2 or lower signified some dissimilarity. The distribution of 2007 and 2008 confidence ratings of refractivity (100-N-unit) and scan-to-scan refractivity products (Fig. 5) shows that forecasters assigned a rating of 3 or higher to refractivity less often than to scan-to-scan refractivity: 58% versus 70%, respectively. Based on the joint analysis, this quantitative finding implies that forecasters found some dissimilarity in the moisture and/or temperature features depicted in the refractivity fields, compared to those depicted in other datasets, more often than in the scan-to-scan refractivity field. Given the broad range of N-unit values used in the refractivity product, it is possible that the higher percentage of events with dissimilarities between refractivity fields and other observations was due to the coarse resolution with which the 100-N-unit scale depicts moisture and/or temperature features. The forecasters’ requests for more detailed displays of refractivity, following the 2007 experiment (section 2c), appear to support this speculation. To explore this idea further, we compared the distribution of 100-N-unit refractivity confidence ratings to the distribution of 2007 ratings of the 100-N-unit refractivity product, combined with the maximum ratings from the set of five refractivity products available in 2008 (one 100-N-unit and four 60-N-unit refractivity products; see Fig. 5). This comparison shows that forecasters tended to rate their confidence in one of the more detailed displays of refractivity higher than the 100-N-unit refractivity field, increasing the percentage of ratings of 3 and higher from 58% to 67%. This result indicates that the broad color scale likely played a role in the confidence ratings assigned by forecasters.
During the analysis, the above findings led to the following question: Under what circumstances did forecasters rate refractivity fields similar to, higher than, or lower than the scan-to-scan refractivity fields? The joint analysis of confidence ratings and written responses reveals that forecasters rated the two refractivity fields similarly when they provided equivalent information about the environment. Forecasters said that they preferred the refractivity field (rated higher than the scan-to-scan refractivity) when the moisture field was nearly homogeneous or when trends were due primarily to the diurnal cycle. Under these conditions, forecasters noted that the small, fluctuating changes in scan-to-scan refractivity (1–2 N units computed from a 10-min difference) were less useful for the diagnosis of the longer-term trends. In contrast, forecasters said that they preferred the scan-to-scan refractivity field (rated higher than refractivity) when gradients in the near-surface moisture field were significant, for example, owing to drylines, cold fronts, or outflow boundaries. Under these conditions, forecasters indicated that the resulting bandlike patterns in the scan-to-scan refractivity were more useful for the diagnosis of the location of gradients progressing across the domain.
To assess the relative utility of refractivity fields to operations, forecasters were asked 1) to list and explain forecast fields or decisions that benefited from the refractivity fields (Q5; see the appendix) and 2) to rate the impacts of refractivity fields and seven other observational datasets on their forecasts (Q6; appendix).
A discussion of findings from forecaster responses follows.
1) Forecast benefits
In 2007 and 2008, participant responses indicated that their forecasts benefited from the refractivity fields on ∼25% (8 of the 31) of the days on which refractivity fields were evaluated. In each of these cases, participants stated that the refractivity fields provided complementary information that enhanced both their capability to analyze the near-surface environment and their confidence in moisture trends. Examples of benefits to the forecast noted by participants are listed in Table 1, including an improved capability to track the speed of a front, higher confidence in moisture trends, and greater confidence in the passage of a front when the associated wind shift was subtle. More significantly, three of the forecast benefits described by participants included information about the environment unavailable from other observational platforms: 1) increased knowledge of trends in moisture near the dryline at scales smaller than those measured by the Oklahoma Mesonet, 2) knowledge of the stability directly behind an outflow boundary, and 3) the ability to track a retreating dryline after its location was obscured by a weak reflectivity bloom caused by biological scatterers. Because the utility of refractivity during reflectivity blooms has not been shown in previous studies, this case is presented to illustrate a forecaster’s perspective on the utility of refractivity fields during operations.
During the afternoon and early evening of 22 April 2007, the forecaster found that the southeastward progression of the dryline over central Oklahoma was depicted clearly in both the KTLX reflectivity and refractivity fields (not shown). In the evening, as the dryline began to retreat northwestward, the forecaster noticed that “the weak reflectivity field bloomed and masked the dryline [Fig. 6a]. However, we were still able to track its progress using the scan-to-scan refractivity” (Fig. 6c). In Fig. 6, the northwestward retreat of the dryline from 0230 to 0534 UTC is depicted in hourly time sequence plots of both refractivity and scan-to-scan refractivity change. Initially, the broad region of relatively low refractivity values across the domain indicates that the dryline is located southeast of the domain (Fig. 6bi); the dryline location was confirmed using dewpoint temperatures from the Oklahoma Mesonet sites. Over the next 3 h, the northwestward movement of the dryline is depicted by the leading edge of both the refractivity gradient and the collocated band of increasing scan-to-scan refractivity values (Figs. 6bii–6biv and 6cii–6civ). The increasingly higher refractivity values southeast of the retreating dryline signify the replacement of dry air with increasingly moist air, due in part to nighttime diurnal cycle processes (e.g., upward latent heat flux). This refractivity-indicated resurgence in moisture, following the passage of the dryline, is corroborated by a 7°C increase in dewpoint temperature at the Shawnee, Oklahoma, site by 0328 UTC; a 6°C increase at the Norman site by 0426 UTC; and 1°–4°C increases at sites in the Oklahoma City area by 0534 UTC.
In this case, the forecaster remarked that the regional weather discussion benefited from the additional information about the location of the dryline provided by the refractivity fields. The forecaster also noted the potential operational benefit of “tracking low-level boundaries with refractivity or scan-to-scan refractivity fields after they become more difficult to detect in the reflectivity field.” In addition to extending the detection of boundaries when biological scatterers are present, as shown here, Weckwerth et al. (2005) and Roberts et al. (2008) each show a case in which refractivity fields revealed the development of a boundary before it was depicted in the reflectivity and velocity fields. Weckwerth et al. (2005) attribute the earlier detection of the boundary to the more instantaneous nature of the refractivity retrievals compared to the time required for insects to accumulate within convergence zones. Clearly, refractivity gradients may portray the location of boundaries both in the presence and absence of insects.
2) Impact of refractivity fields on the forecast compared to other observations
As mentioned earlier, forecasters were asked to rate the impacts of refractivity and scan-to-scan refractivity, along with seven other observational platforms and/or analyses, on their forecasts (Q6; see the appendix). The distribution of ratings from both 2007 and 2008 shows that the impact of both refractivity fields on forecasts was relatively low compared to familiar operational observations and analyses (Fig. 7). The highest ratings were given to two benchmark observational platforms: Oklahoma Mesonet and KTLX reflectivity data. Why were the refractivity fields rated low compared to these two platforms? To answer this question in relation to the Oklahoma Mesonet, forecasters who completed the 2008 questionnaire were asked, “How do you think that having the Oklahoma Mesonet data impacts the usefulness of refractivity fields?” (Q7; see the appendix). In response, forecasters unanimously said that having data from the Oklahoma Mesonet reduces the usefulness of, and need for, refractivity fields. Most forecasters attributed this assessment, in part, to the comparable temporal resolution of the Oklahoma Mesonet data (∼5-min updates) and their sufficient spatial resolution to identify mesoscale features within the refractivity domain. Additionally, participants valued that the Oklahoma Mesonet provides more direct measurements of temperature and moisture over a much larger domain.
Although as a group the forecasters found the Oklahoma Mesonet to be a better tool for assessing the temperature and moisture fields near the surface than the refractivity fields, a few forecasters also mentioned ways in which the refractivity fields would be beneficial to them. One forecaster stated that refractivity fields would be “beneficial for identifying boundaries unresolved by the Oklahoma Mesonet.” Another forecaster mentioned that refractivity fields would likely “be useful if Oklahoma Mesonet data were missing.” Finally, as indicated previously, a forecaster pointed out that the information provided by refractivity fields can “boost confidence in near-term forecasts.”
An understanding of why the impact of refractivity retrievals on the forecast was rated low compared to the WSR-88D (Fig. 7) was gained from the thematic analysis of the responses written by the participants to Q4, which asked participants to describe their interpretation of the refractivity fields they analyzed (see the appendix). This analysis indicates that the higher “impact on forecast” ratings given to the WSR-88D data, compared to the refractivity retrievals, related most strongly to how well each dataset depicted boundaries within the refractivity domain. When the weather feature of interest was a boundary (e.g., outflow, cold front, or dryline, ∼50% of events), in most cases the participants stated that they observed the boundary clearly in both the reflectivity data and refractivity fields. As discussed earlier, one dryline event was depicted more clearly in the refractivity fields, owing to the obscuration of this feature in the reflectivity field by biological scatterers. On three occasions, however, forecasters noticed situations in which a boundary was depicted better by the reflectivity data than by the refractivity retrievals. One example is 1026–1101 UTC 30 April 2007, when a forecaster observed an outflow boundary in the reflectivity field progressing northeastward (Figs. 8ai–8aiv) that was absent in the refractivity and scan-to-scan refractivity change fields (Figs. 8bi–8biv and 8ci–8civ). The surrounding Oklahoma Mesonet sites revealed that the lack of signal in the refractivity products resulted from the approximately constant dewpoint temperatures and slight increases in temperature associated with the passage of the boundary. This case illustrates an important limitation of refractivity data: boundaries characterized by particularly weak moisture and/or temperature gradients may not be depicted as clearly, or at all, in the refractivity data.
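The weakness of this limitation case can be quantified with the same refractivity equation: holding vapor pressure fixed and warming the air slightly, as the surrounding Oklahoma Mesonet sites indicated for this boundary, produces only a small refractivity change. A minimal sketch with assumed (hypothetical) values:

```python
def refractivity(p_hpa, t_k, e_hpa):
    """Radio refractivity N (Bean and Dutton 1968): density term + moisture term."""
    return 77.6 * p_hpa / t_k + 3.73e5 * e_hpa / t_k**2

# Assumed boundary passage: dewpoint (hence vapor pressure e) unchanged,
# temperature up 1 K, pressure steady.
p, e = 970.0, 14.0  # hPa (hypothetical values)
d_n = refractivity(p, 294.0, e) - refractivity(p, 293.0, e)
print(round(d_n, 2))  # ~ -1.3 N-units, easily lost in retrieval noise
```

Compared with the tens of N-units produced by a strong moisture gradient, a thermal-only boundary of this kind leaves almost no refractivity signature.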
4. Discussion and remaining questions
This study and earlier research on refractivity (Weckwerth et al. 2005; Demoz et al. 2006; Fabry 2006; Buban et al. 2007; Roberts et al. 2008) show a variety of examples in which refractivity fields provide high-spatial-resolution information about the moisture field within areas having good ground clutter targets. Although these earlier case studies show some promising capabilities for refractivity fields, such as the diagnosis of boundaries and of moistening and drying of the boundary layer environment, they do not definitively demonstrate that this relatively new dataset may be used to improve forecast services.
This study advances our understanding of the utility of refractivity retrievals, as a supplemental dataset, in the diagnosis of near-surface variations in moisture and temperature in an operational environment. The 41 forecaster responses to the questionnaire reveal the limited, complementary role played by refractivity retrievals in participants’ real-time data analysis and forecasting. The utility of the data was limited by the size of the refractivity domain (∼40 km), the indirect measurement of moisture and temperature, and observational information that often duplicated what other operational datasets already provided. The complementary role of refractivity retrievals in participants’ forecasts was unsurprising, as the retrievals provided an additional source of information on moisture and temperature variability within the relatively small refractivity domain. One forecaster described the complementary nature of the refractivity field in comparison with the Oklahoma Mesonet as follows: “[The Oklahoma Mesonet] decreases ‘need’ [of refractivity fields], but can still ‘use’ refractivity, especially when Mesonet data are unavailable or to boost confidence in near-term forecasts.” Even without access to a mesonet, a participant in REFRACTT (Roberts et al. 2008) likewise described the refractivity fields as a complementary dataset.
Regardless of its complementary utility, forecasters unanimously indicated that the refractivity data did not make a significant difference in their forecasts. In postanalysis interviews, when forecasters were specifically asked about the importance of refractivity retrievals to their forecasts, forecasters universally said that they were “hard pressed” to describe a case in which the refractivity fields made a significant difference in their forecast. Hence, it is unsurprising that when asked about the importance of implementing refractivity retrievals operationally, this study’s distribution of forecaster ratings shows relatively low ratings; the median rating was 2, and the 75th percentile rating was 3 (Fig. 9; based on a 1–5 rating scale, with 5 being the highest rating).
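The summary statistics quoted here, and the 1.5 × IQR convention used to flag outliers in the box-and-whiskers plots (Figs. 7 and 9), can be reproduced from any set of ratings. The ratings below are hypothetical, chosen only so that the summary values match those reported for Fig. 9; the individual responses are not published:

```python
import statistics

# Hypothetical 1-5 "importance" ratings matching the Fig. 9 summary
# (median 2, 75th percentile 3); the actual responses are not published.
ratings = [1, 1, 2, 2, 2, 2, 3, 3, 3, 4]
q1, median, q3 = statistics.quantiles(ratings, n=4, method="inclusive")
iqr = q3 - q1
# Values beyond 1.5 x the IQR from the quartiles are drawn as outliers
# in the paper's box-and-whiskers plots.
fences = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
print(median, q3)  # 2.0 3.0
```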
Although participant responses to this study clearly show the limited utility of refractivity retrievals to operations, they also raised several relevant, unanswered questions. Might a WFO with significantly poorer surface data coverage find the refractivity retrievals more useful? Do site-specific forecast challenges exist at other WFOs that could be mitigated by the diagnostic information provided by refractivity retrievals? Are there refractivity retrieval applications, yet to be discovered, that would make a more significant contribution to the forecast process?
An application of refractivity retrievals likely worth exploring is the assimilation of refractivity fields into numerical models. In a numerical prediction study, Sun (2005) used refractivity observations from the NCAR S-Pol, collected during IHOP 2002, to assess the impact of assimilating the relative humidity inferred from the refractivity field on convection initiation and the moisture field. Sun (2005) found that the assimilation of the low-level humidity positively impacted the variability of the moisture field and the associated convection initiation of an isolated storm. The possibility that the limited spatial coverage of the refractivity observations produced a storm with a smaller horizontal scale than that observed, though, suggests the need for broader spatial coverage of refractivity observations.
Currently, a team of engineers and atmospheric scientists at the Atmospheric Radar Research Center, Center for Analysis and Prediction of Storms, and the National Severe Storms Laboratory, all located at the National Weather Center in Norman, Oklahoma, are funded by the National Science Foundation to research improvements to short-term forecasts through the data assimilation of refractivity fields. During 2008–10, radar refractivity data will be collected from a suite of seven Doppler radars in Oklahoma, including two operational WSR-88Ds [KTLX, Frederick, Oklahoma (KFDR)], the National Weather Radar Test Bed Phased Array Radar (Zrnić et al. 2007), and the four Collaborative Adaptive Sensing of the Atmosphere (CASA) X-band radars (Brotzge et al. 2006). The increased radar coverage will likely provide a nearly complete swath of refractivity measurements over southwest Oklahoma. These new data, and the refractivity retrievals previously collected in spring 2007 and 2008, will provide a rich dataset from which a climatology of moisture patterns may be derived and new uses of refractivity retrievals explored.
Acknowledgments
Funding for this research was provided by the NOAA/Office of Oceanic and Atmospheric Research under NOAA–University of Oklahoma Cooperative Agreement NA17RJ1227, the U.S. Department of Commerce through coordination with the Radar Operations Center, and the National Science Foundation through Grant ATM0750790. Coauthor David Bodine was supported by an American Meteorological Society fellowship sponsored by the Raytheon Corporation. The authors thank the Radar Operations Center for maintaining the data flow from KTLX during the experiments and the Norman, Oklahoma, WFO for their participation in the experiment. The authors appreciate assistance from Mark Laufersweiler, Kevin Manross, and Travis Smith. We also thank Daphne LaDue for improving the quality of this study by sharing her expertise in questionnaire development and qualitative analysis, and providing constructive comments that improved the paper. We additionally thank Vicki Farmer for helping with figure layout and Conrad Ziegler, Dusan Zrnić, Rodger Brown, and Jim LaDue for their helpful reviews of the manuscript. Finally, we thank the anonymous reviewers of this paper for helping to fine-tune the paper prior to publication.
REFERENCES
Ary, D., L. C. Jacobs, and A. Razavieh, 2002: Introduction to Research in Education. 6th ed. Wadsworth, 582 pp.
Bean, B. R., and E. J. Dutton, 1968: Radio Meteorology. Dover, 435 pp.
Boyatzis, R. E., 1998: Transforming Qualitative Information: Thematic Analysis and Code Development. Sage Publications, 184 pp.
Brock, F. V., K. C. Crawford, R. L. Elliott, G. W. Cuperus, S. J. Stadler, H. L. Johnson, and M. D. Eilts, 1995: The Oklahoma Mesonet: A technical overview. J. Atmos. Oceanic Technol., 12, 5–19.
Brotzge, J., K. K. Droegemeier, and D. J. McLaughlin, 2006: Collaborative Adaptive Sensing of the Atmosphere (CASA): New radar system for improving analysis and forecasting of surface weather conditions. Transport. Res. Record: J. Transport. Res. Board, 1948, 145–151.
Buban, M., C. Ziegler, E. Rasmussen, and Y. Richardson, 2007: The dryline on 22 May 2002 during IHOP: Ground-radar and in situ data analyses of the dryline and boundary layer evolution. Mon. Wea. Rev., 135, 2473–2505.
Cheong, B. L., R. D. Palmer, C. D. Curtis, T.-Y. Yu, D. S. Zrnić, and D. Forsyth, 2008: Refractivity retrieval using the Phased Array Radar: First results and potential for multimission operation. IEEE Trans. Geosci. Remote Sens., 46, 2527–2537.
Creswell, J. W., 2002: Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 2nd ed. Sage Publications, 227 pp.
Dabberdt, W. F., and T. W. Schlatter, 1996: Research opportunities from emerging atmospheric observing and modeling capabilities. Bull. Amer. Meteor. Soc., 77, 305–323.
Demoz, B., and Coauthors, 2006: The dryline on 22 May 2002 during IHOP 2002: Convective-scale measurements at the profiling site. Mon. Wea. Rev., 134, 294–310.
Demuth, J. L., E. Gruntfest, R. E. Morss, S. Drobot, and J. K. Lazo, 2007: WAS*IS: Building a community for integrating meteorology and social science. Bull. Amer. Meteor. Soc., 88, 1729–1737.
Doswell, C. A., and J. A. Flueck, 1989: Forecasting and verifying in a field research project: DOPLIGHT ’87. Wea. Forecasting, 4, 97–109.
Emanuel, K., and Coauthors, 1995: Report of the First Prospectus Development Team of the U.S. Weather Research Program to NOAA and the NSF. Bull. Amer. Meteor. Soc., 76, 1194–1208.
Fabry, F., 2004: Meteorological value of ground target measurements by radar. J. Atmos. Oceanic Technol., 21, 560–573.
Fabry, F., 2006: The spatial variability of moisture in the boundary layer and its effect on convective initiation: Project-long characterization. Mon. Wea. Rev., 134, 79–91.
Fabry, F., C. Frush, I. Zawadzki, and A. Kilambi, 1997: On the extraction of near-surface index of refraction using radar phase measurements from ground targets. J. Atmos. Oceanic Technol., 14, 978–987.
Kain, J. S., P. R. Janish, S. J. Weiss, M. E. Baldwin, R. S. Schneider, and H. E. Brooks, 2003: Collaboration between forecasters and research scientists at the NSSL and SPC: The Spring Program. Bull. Amer. Meteor. Soc., 84, 1797–1806.
Koch, S. E., and S. Saleeby, 2001: An automated system for the analysis of gravity waves and other mesoscale phenomena. Wea. Forecasting, 16, 661–679.
Lakshmanan, V., T. Smith, G. J. Stumpf, and K. Hondl, 2007: The Warning Decision Support System-Integrated Information (WDSS-II). Wea. Forecasting, 22, 596–612.
McPherson, R. A., and Coauthors, 2007: Statewide monitoring of the mesoscale environment: A technical update on the Oklahoma Mesonet. J. Atmos. Oceanic Technol., 24, 301–321.
Miles, M. B., and A. M. Huberman, 1994: Qualitative Data Analysis. 2nd ed. Sage Publications, 337 pp.
Morss, R. E., O. V. Wilhelmi, M. W. Downton, and E. C. Gruntfest, 2005: Flood risk, uncertainty, and scientific information for decision making: Lessons from an interdisciplinary project. Bull. Amer. Meteor. Soc., 86, 1593–1601.
National Research Council, 1998: The Atmospheric Sciences: Entering the Twenty-First Century. National Academies Press, 373 pp.
National Research Council, 2009: Observing Weather and Climate from the Ground Up: A Nationwide Network of Networks. National Academies Press, 234 pp.
NOAA, cited 2008: Research in NOAA—A five year plan: Fiscal years 2008–2012. [Available online at http://www.nrc.noaa.gov/plans_docs/5yrp_2008_2012_final.pdf].
Patton, M. Q., 1990: Qualitative Research and Evaluation Methods. 3rd ed. Sage Publications, 598 pp.
Roberts, R. D., and Coauthors, 2008: REFRACTT-2006: Real-time retrieval of high-resolution low-level moisture fields from operational NEXRAD and research radars. Bull. Amer. Meteor. Soc., 89, 1535–1538.
Scharfenberg, K. A., and Coauthors, 2005: The Joint Polarization Experiment: Polarimetric radar in forecasting and warning decision making. Wea. Forecasting, 20, 775–788.
Sun, J., 2005: Convective-scale assimilation of radar data: Progress and challenges. Quart. J. Roy. Meteor. Soc., 131, 3439–3463.
Weckwerth, T. M., and Coauthors, 2004: An overview of the International H2O Project (IHOP 2002) and some preliminary highlights. Bull. Amer. Meteor. Soc., 85, 253–277.
Weckwerth, T. M., C. R. Pettet, F. Fabry, S. Park, M. A. LeMone, and J. W. Wilson, 2005: Radar refractivity retrieval: Validation and application to short-term forecasting. J. Appl. Meteor., 44, 285–300.
Whiton, R. C., P. L. Smith, S. G. Bigler, K. E. Wilk, and A. C. Harbuck, 1998: History of operational use of weather radar by U.S. Weather Services. Part II: Development of operational Doppler weather radars. Wea. Forecasting, 13, 244–252.
Wilks, D. S., 2006: Statistical Methods in the Atmospheric Sciences. 2nd ed. Academic Press, 627 pp.
Zrnić, D. S., and Coauthors, 2007: Agile beam phased array radar for weather observations. Bull. Amer. Meteor. Soc., 88, 1753–1766.
APPENDIX
Refractivity Questionnaire
The revised questionnaire forecasters used to evaluate refractivity fields in 2008 is presented below. This survey differs from the 2007 survey in that questions 4 and 7 were added to obtain clearer information about how having surface observations from the Oklahoma Mesonet impacted participant evaluations of refractivity fields. Also, a contextual question asking which shift a forecaster was working was removed.
Refractivity Retrieval Questionnaire
Name: ________________________________________
Title: ________________________________________________
Years Forecasting Experience: ________________________________________________
Date & Period(s) of Evaluation: __________________
Answer the questions below based on your shift forecasts. Mark responses to multiple choice questions with an “x”.
1. During your shift, which of the following were forecast concerns within ∼32 nm (60 km) of KTLX? Please keep these concerns in mind as you complete the questionnaire.
2. Were you concerned about short-term or long-term forecasts?
3. For the refractivity fields you viewed, rate your confidence in their depiction of the near-surface moisture field. Explain your ratings.
Explanation of Ratings:
4. Please describe your interpretation of the refractivity fields you analyzed.
5. List below the forecast field(s) or decisions that benefited from the refractivity retrievals and explain the benefit. (Some example benefits: higher confidence, greater lead time, more accuracy within region where retrievals were available, etc.)
Field: _______________________________
Benefit:
Field: _______________________________
Benefit:
6. Please rate the impact of the fields listed on your forecasts.
7. How do you think having Oklahoma Mesonet data impacts the usefulness of refractivity fields?
8. Rate the importance of incorporating refractivity retrievals into AWIPS at all WFOs. Please explain.
Refractivity: Low __________ High
Refractivity_Change_SS (scan-to-scan): Low __________ High
9. Respond with any other comments.

Citation: Weather and Forecasting 24, 5; 10.1175/2009WAF2222256.1

Fig. 1. Locations of Oklahoma Mesonet stations and the KTLX WSR-88D located within the refractivity domain.

Fig. 2. Overview of the data flow. The raw data are processed in real time for refractivity fields at the radar site. These fields are sent to a WDSS-II data server at the NWC and distributed to multiple displays for the users.

Fig. 3. An example view of the refractivity and scan-to-scan refractivity with Oklahoma Mesonet data overlaid on WDSS-II. The scan-to-scan refractivity field shows a band of positive refractivity change, indicative of an increase in near-surface moisture.

Fig. 4. Examples of the types of weather events during which forecasters evaluated the use of the refractivity fields: (a) westward-moving outflow boundary (orange line indicates the position of the reflectivity fine line), (b) moisture gradient behind a cold front, and (c) moisture advection.

Fig. 5. Distribution of forecaster confidence ratings for the 2007 and 2008 scan-to-scan refractivity (open circle) and 2008 refractivity fields (cross symbol). Also shown are the forecaster confidence ratings of the refractivity in 2007 combined with the maximum confidence rating given to the five refractivity products available in 2008 (plus symbol). The number of samples is 33.

Fig. 6. Time sequence of the (a) reflectivity, (b) refractivity, and (c) scan-to-scan refractivity change depicting the northwestward retreat of the dryline during the evening on 22 Apr 2007 (23 Apr in UTC).

Fig. 7. Box-and-whiskers plot of the “impact of field on forecast” ratings for nine observational platforms and fields from the 2007 and 2008 spring experiments. Filled boxes show the distribution of these ratings within the 25th and 75th percentiles, and the median is denoted by a thick white line. The outermost braces indicate values within 1.5 × the interquartile range (defined as the difference between the 25th and 75th percentiles); horizontal lines indicate outliers.

Fig. 8. Time sequence of the (a) reflectivity, (b) refractivity, and (c) scan-to-scan refractivity change on 30 Apr 2007. In this case, an outflow boundary, produced by storms located to the southeast of the refractivity domain, is depicted in the reflectivity field but absent in the refractivity fields.

Fig. 9. Box-and-whiskers plot of the “importance of operational implementation” ratings of the refractivity and scan-to-scan refractivity fields from the 2007 and 2008 spring experiments. The configuration is the same as in Fig. 7.
Table 1. Weather events during which forecasters found refractivity fields to be of benefit to the forecast.

