Weather Radar Spatiotemporal Saliency: A First Look at an Information Theory–Based Human Attention Model Adapted to Reflectivity Images

David Schvartzman, Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, and NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma

Sebastián Torres, Cooperative Institute for Mesoscale Meteorological Studies, University of Oklahoma, and NOAA/OAR/National Severe Storms Laboratory, Norman, Oklahoma

Tian-You Yu, School of Electrical and Computer Engineering, Advanced Radar Research Center, and School of Meteorology, University of Oklahoma, Norman, Oklahoma

Abstract

Forecasters often monitor and analyze large amounts of data, especially during severe weather events, which can be overwhelming. Thus, it is important to effectively allocate their finite perceptual and cognitive resources to the most relevant information. This paper introduces a novel analysis tool that quantifies the amount of spatial and temporal information in time series of constant-elevation weather radar reflectivity images. The proposed Weather Radar Spatiotemporal Saliency (WR-STS) is based on a mathematical model of the human attention system (referred to as saliency) adapted to radar reflectivity images and makes use of information theory concepts. It is shown that WR-STS highlights spatially and temporally salient (attention-attracting) regions in weather radar reflectivity images, which can be associated with meteorologically important regions. Its skill in highlighting current regions of interest is assessed by analyzing the WR-STS values within regions in which severe weather is likely to strike in the near future, as defined by National Weather Service forecasters. The performance of WR-STS is demonstrated for a severe weather case and analyzed for a set of 10 diverse cases. Results support the hypothesis that WR-STS can identify regions with meteorologically important echoes and could assist in discerning fast-changing, highly structured weather echoes during complex severe weather scenarios, ultimately allowing forecasters to focus their attention and spend more time analyzing those regions.

Corresponding author e-mail: David Schvartzman, david.schvartzman@noaa.gov


1. Introduction

The network of Weather Surveillance Radar-1988 Doppler (WSR-88D) consists of 158 high-resolution, S-band, Doppler polarimetric weather radars operated by the U.S. National Weather Service (NWS) (Whiton et al. 1998), the U.S. Air Force, and the Federal Aviation Administration. The WSR-88D surveils the atmosphere by mechanically rotating a parabolic antenna using one of several predefined scanning strategies known as volume coverage patterns (VCPs). VCPs for nonclear-air conditions take 4–6 min to complete, which defines the temporal resolution of the WSR-88D network. NWS forecasters rely heavily on data from the WSR-88D to observe weather phenomena and, more specifically, to identify potentially severe weather (Andra et al. 2002). Storms may have remarkably different structures and can evolve in unpredictable ways. In particular, severe storms (e.g., supercell thunderstorms or hailstorms) can evolve in a matter of a few minutes (Heinselman et al. 2008) and potentially result in the loss of lives and property. Recent experiments strongly suggest that radar data with high temporal resolution could be beneficial in the warning decision process of NWS forecasters, leading to increased warning lead times for severe storms (Bowden et al. 2015; Heinselman et al. 2012). However, radar data with higher temporal resolution would increase the already large amount of data that forecasters have to analyze.

Forecasters monitor and process data from many different sources during severe weather situations, which can be overwhelming. For example, during a typical tornado outbreak, numerous convective supercell storms can form, with only a few of them producing tornadoes. This type of severe thunderstorm usually has distinctive radar signatures (hook echo, velocity couplets, etc.) that evolve quickly and attract the forecaster’s attention. Attention is an instinctive biological mechanism shared by humans and other animals; it enables us to selectively focus on incoming stimuli and discard less interesting signals (Mancas et al. 2012). About 80% of the information we receive every day comes from our visual system (Li and Gao 2014). The human retina can receive an equivalent of up to 10 billion bits of information per second (Raichle 2010); nevertheless, the cortex has only approximately 20 billion neurons (Shepherd 2003; Koch 2004). Thus, the amount of information we receive significantly exceeds the amount of information we can actually store in our brains, and we are routinely faced with information overload. This problem is exacerbated for weather forecasters, who must maintain a conceptual model of the atmosphere and analyze information from multiple sources (e.g., radar, satellite, surface observations, model output) simultaneously when making warning decisions. In addition to capacity limitations, our brains are constrained by their processing capabilities, and it is impossible to efficiently analyze all of the visual information at once (Li and Gao 2014). Even though our brains have these limitations, we are still able to accomplish highly dynamic tasks thanks to our attention system. Through the attention system, we can enormously reduce the amount of information flooding our brains and focus only on important sources of information.
Attention can be defined as the allocation of cognitive resources to sources of relevant information. There are many types of attention, and they are usually classified by the way in which they make use of cognitive resources. According to Sohlberg and Mateer (1989), attention types can be classified as focused, sustained, selective, alternating, and divided. We are interested in selective attention, which is defined as the ability to maintain cognitive resources on a specific conspicuous object or region while ignoring all other competing stimuli (Li and Gao 2014). Focusing attention on relevant sources of information becomes crucial in overwhelming tasks, such as weather forecasting and warning.

A bio-inspired model of saliency (Itti et al. 1998, 2005; Tsotsos et al. 2005) has been applied to several fields (Frintrop et al. 2010; Itti and Koch 2001). For example, Li et al. (2011) used visual saliency to detect interesting regions in images and improve their compression by dynamically adjusting the resolution based on the degree of interest. Maddalena and Petrosino (2008) developed a saliency-based technique that separates foreground and background components for scenes of stationary cameras used in video surveillance applications. In this work, saliency is used to highlight regions with high temporal and spatial information in weather radar reflectivity images. That is, the proposed Weather Radar Spatiotemporal Saliency (WR-STS) uses a computational model of the human’s selective attention system based on information theory that seeks to resemble a forecaster’s visual examination of weather radar reflectivity images. Such a mathematical model cannot capture the forecaster’s complex conceptual model of the atmosphere; however, we postulate that WR-STS could help focus a forecaster’s attention, since the structured and fast-evolving regions highlighted by WR-STS agree with regions of meteorological importance. Automatically identifying salient regions in weather radar images could be useful for several applications. For instance, WR-STS could assist human forecasters throughout the warning decision process as an additional nowcasting-like tool that highlights regions with higher saliency, especially in complex severe weather scenarios. Instead of performing manual observations of all the radar data collected in a volume scan, WR-STS could aid forecasters by eliminating the need to look at some elevation angles, thus increasing the time available for looking at important regions or data from other sources (e.g., weather stations and satellites). Furthermore, it could be applied to adaptive weather sensing (Reinoso-Rondinel et al. 2010; Torres et al. 
2016), whereby regions with high saliency are updated more frequently than other less informative regions. These faster updates of quickly evolving, finely structured storm regions could aid in the interpretation of severe weather phenomena and could provide increased confidence in the radar data during the warning decision process.

The rest of the paper is organized as follows. Section 2 presents definitions and concepts associated with visual saliency and its activation function. In section 3, WR-STS is described mathematically, and its characteristics and implementation parameters are discussed. In section 4, data from a severe storm outbreak are processed using WR-STS, and the results are qualitatively analyzed. In section 5, the performance of the proposed metric is analyzed by correlating it with warning polygons issued by NWS forecasters during the development of the event. A quantitative analysis is carried out, and results are drawn from this discussion. Section 6 summarizes the conclusions of this work, reviews the limitations of the model, and provides recommendations for future work.

2. Visual saliency

The existence of an intuitive mechanism in the brain (and a visual map) that can depict conspicuous regions in the field of view of the human visual system was originally proposed by Koch and Ullman (1987). In 1998, Itti and Koch developed the first computational implementation of a bottom-up, task-independent, selective visual attention model (Itti et al. 1998) that is the basis for most saliency models, including WR-STS. Their model has four main steps. First, three different visual features (color, intensity, and orientation) are computed from the input image. Second, a scale decomposition is applied to each feature map, and the outcome is a set of 12 feature maps (3 features, each with four scales). The third step is an across-scale combination followed by an activation (also called normalization) that highlights unique attributes from the feature maps. Last, a weighted linear combination of the three remaining activated-and-scale-combined features is made to arrive at the final spatial saliency map. This initial implementation of a selective attention model was very successful in providing better predictions of human eye fixations than simpler methods (e.g., chance or direct application of entropy), and inspired many research efforts that resulted in several saliency models. Different saliency models involve very different technical approaches, but they are all derived from the central concept of information innovation in some context (Riche et al. 2013). Popular saliency models include the Local and Global Saliency (LGS) by Itti et al. (1998), the Covariance Saliency (CovSal) by Erdem and Erdem (2013), the Graph-Based Visual Saliency (GBVS) by Harel et al. (2007), the Boolean Map–Based Saliency (BMS) by Zhang and Sclaroff (2013), the bottom-up algorithm for global rare feature detection (RARE2012) by Riche et al. (2013), and the Attention Based on Information Maximization (AIM) by Bruce and Tsotsos (2007).
There are several technical differences among these methods, but the main difference is in their activation functions. As we shall discuss in more detail in the next section, the purpose of the activation function is to pick out unique attributes from the feature maps. LGS uses a deterministic function as the activation. CovSal uses a covariance-based activation technique to obtain the activated feature maps. GBVS constructs fully connected graphs over each feature map and assigns weights between nodes of the graph. Graphs are then treated as Markov chains to compute the activation maps, and maps are combined across features to obtain the final map. BMS computes saliency using the Boolean map theory of visual attention (Huang and Pashler 2007) with Bayesian-like activation functions. Finally, both RARE2012 and AIM use information theory type of activation functions (i.e., Shannon’s entropy).

Saliency models can be divided into two broad categories based on their applications: bottom-up and top-down models. In bottom-up saliency models, attention is driven by salient stimuli without preassumptions (i.e., memory free), independent from prior knowledge. Bottom-up models use low-level features extracted from the image, such as intensity, contrast, and orientation. Once those features are generated, the models look for rare, contrasted, novel, more informative (less compressible) regions that maximize the amount of visual information. In other words, they all intend to find unusual characteristics in a given context: in the spatial, temporal, or both dimensions. On the other hand, saliency can also be guided by memory-dependent mechanisms with the use of prior or learned knowledge that is ingested into the model. These are called top-down saliency models, and they can dramatically increase the performance due to the prior knowledge incorporated in the model. It could be argued that the features of different weather phenomena in radar images may provide useful prior knowledge in top-down saliency models (e.g., it is more likely to observe severe thunderstorms in areas of high reflectivity). Similarly, task-specific features could be extracted to improve the detection of different types of salient areas (e.g., snow, freezing rain, and even convergence lines). However, the features observed by weather radar can widely vary depending on the radar acquisition parameters (e.g., spatial sampling, temporal resolution, variance of estimates) and the characteristics of the storms (e.g., intensity, orientation, distance from the radar). Thus, this proof-of-concept implementation adopts the simpler bottom-up saliency model; performance improvements based on top-down saliency models are left for future work.

As mentioned before, visual saliency can be used to distribute finite perceptual resources across the most relevant regions of an image. The spatial activation of the feature maps is a key step to accomplish this. Figure 1 (adapted from Itti et al. 1998) depicts this concept through an example in which the visual stimulus contains only a single salient feature: the red line oriented vertically among many horizontally oriented red lines. Each red line, regardless of its orientation, produces a maximum value in the intensity feature extraction. As a result, the intensity map consists of many peaks with the same value. In this case, it is expected that the activation of the intensity map will lead to a result indicating no information. On the contrary, the orientation map has a slightly higher peak in the region around the vertically oriented line. It is expected that the activation function can amplify this peak while suppressing the others. In other words, an ideal activation function would highlight only remarkable peaks from each feature map, picking out the most spatially informative regions. A conceptually similar activation can be performed in the time domain.

Fig. 1.

Notion of activation function adapted from Itti et al. (1998). (left) The visual stimulus used as an input for the model. After extracting the features, (top center) the intensity map has multiple equally high peaks, (top right) which after activation produce a flat map. In contrast, (bottom center) the orientation feature map shows a larger peak embedded in slightly smaller peaks, and (bottom right) activation enhances that peak while suppressing all others.

Citation: Journal of Atmospheric and Oceanic Technology 34, 1; 10.1175/JTECH-D-16-0092.1

The computation of spatial and temporal saliency for weather radar reflectivity images is described in the next section.

3. WR-STS

A simplified functional block diagram of the WR-STS computation process is shown in Fig. 2. The blocks of this simplified diagram are described in detail next.

Fig. 2.

Functional block diagram outlining the main steps for the computation of WR-STS.


a. Multiscale decomposition and feature extraction

After interpolating the polar data collected by the radar onto a Cartesian coordinate system (see Schvartzman 2015), we apply a multiscale decomposition and a spatial feature extraction as illustrated in step 1 of Fig. 3. Storm-scale and mesoscale weather phenomena can span a large range of spatial scales, from scales as small as 30 m for F0 tornadoes (Brooks 2004) to more than 100 km for very large thunderstorms or squall lines. The multiscale decomposition is accomplished using the steerable pyramids method (Freeman and Adelson 1991; Simoncelli and Freeman 1995), which performs a linear multiscale, multiorientation image decomposition. In a nutshell, a directional wavelet decomposition of the Fourier transform of the image is used to obtain independent representations of scale and orientation that are translation and rotation invariant. The reader is referred to Mallat (2008) for more details about this technique.
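To make the multiscale step concrete, the sketch below builds a simple Gaussian pyramid over a synthetic reflectivity field. This is only a stand-in under stated assumptions: the paper's implementation uses steerable pyramids, which additionally separate orientation bands, and all function and variable names here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_decompose(refl, n_scales=3):
    """Low-pass versions of a reflectivity image at progressively coarser
    scales (the smoothing doubles at each step, mirroring the doubling of
    the physical scales). A Gaussian pyramid is a simple stand-in for the
    steerable-pyramid decomposition described in the text."""
    scales = []
    sigma = 1.0
    for _ in range(n_scales):
        scales.append(gaussian_filter(refl, sigma=sigma))
        sigma *= 2.0  # next scale is twice as coarse
    return scales

# Synthetic dBZ field on a Cartesian grid
refl = np.random.default_rng(0).uniform(0.0, 75.0, size=(128, 128))
pyramid = multiscale_decompose(refl, n_scales=3)
```

Each successive level suppresses finer structure, so features of a given size dominate one level of the pyramid.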

Fig. 3.

Illustration of the steps for the computation of WR-STS (reduced number of scales and orientation filters is used for simplicity). First, consecutive radar reflectivity images are scale decomposed, and their intensity, contrast, and orientation features are extracted. Then, a spatial activation function is applied on each scale, and single-feature maps are combined across all scales. Next, the feature maps are fused into spatial saliency maps, which are used to compute the temporal feature. Finally, the four feature maps are combined through a weighted average to produce the WR-STS map.


The selection of the smallest scale is based on the average size of tornado widths reported on the ground for severe weather events with considerable damage (F2 or higher). Brooks (2004) reported that approximately 50% of F2 tornadoes have a mean width of 500 m. Thus, the smallest scale of this model is 500 m, and subsequent scales are obtained by doubling the size of the previous one. To avoid spatial aliasing, the resolution of the Cartesian radar grid is 250 m. The use of smaller scales might be problematic, since it could result in undesired echoes (clutter, biological scatterers, noise, etc.) being highlighted. It was determined (Schvartzman 2015) that using seven scales (500 m, 1 km, 2 km, 4 km, 8 km, 16 km, and 32 km) is sufficient for identifying regions of interest in weather radar reflectivity images. This choice of scales will be supported by the results discussed in section 5.

Once the image is represented at multiple scales (as exemplified in Fig. 3 for two spatial scales), the feature extraction step is performed. Three types of spatial features are extracted at each scan time t, scale s, and, if applicable, orientation θj; these are denoted by I(t, s) for intensity, C(t, s) for contrast, and O(t, s, θj) for orientation. To simplify the notation and without loss of generality, we assume a constant update time T such that t is an integer multiple of T. In this work, only the field of radar reflectivity is used, which can be considered as a grayscale image. The reflectivity is expressed in dBZ units because the logarithmic scale allows the model to highlight relatively weaker but important features that otherwise might be obscured. The intensity feature for grayscale images is simply the grayscale image matrix normalized to the range [0, 1], corresponding to the fixed range of possible reflectivity values in dBZ. The contrast feature is computed by running a 2D window over every pixel of the image, taking a local neighborhood of pixels and computing the standard deviation of the pixel values in the neighborhood. For symmetry purposes, the neighborhood window used here is a disk, and edge effects are handled by replication. For the image with the highest resolution, a 10-pixel-diameter disk is used. For the next scale, the size of the window is reduced by one pixel in each dimension. The same is done subsequently to coarser scales to compute the contrast feature. The example in Fig. 3 shows only the first and last scales and four orientations; however, the WR-STS implementation uses seven scales and 16 orientations (Schvartzman 2015).
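A minimal sketch of the contrast feature follows. For speed it uses a square window and the identity std = sqrt(E[x²] − E[x]²) built from box filters instead of an explicit sliding-window loop; the paper uses a disk-shaped window with replicated edges, and the names below are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast(image, size=9):
    """Per-pixel standard deviation of the values in a size-by-size window
    (a square window approximates the disk described in the text)."""
    mean = uniform_filter(image, size=size, mode='nearest')
    mean_sq = uniform_filter(image * image, size=size, mode='nearest')
    var = np.clip(mean_sq - mean * mean, 0.0, None)  # clip rounding noise
    return np.sqrt(var)

# A flat half (low contrast) next to a noisy half (high contrast)
img = np.zeros((64, 64))
img[:, 32:] = np.random.default_rng(1).normal(0.0, 5.0, size=(64, 32))
contrast = local_contrast(img)
```

The contrast map is near zero over the homogeneous half and close to the noise standard deviation over the variable half, which is exactly the behavior the feature is meant to capture.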

The orientation features can be derived from the reflectivity image at each scale by convolving it with Gabor filters oriented in specific directions. These Gabor filters can approximate the receptive field impulse response of orientation-selective neurons in the primary visual cortex (Daugman 1985); therefore, they are particularly suited for detecting features oriented in a particular direction that visually stand out from their surroundings. The convolution (denoted by ∗) between the scale-decomposed reflectivity images Z(t, s) and the Gabor filter G(θj) can be computed as (Li and Gao 2014)

O(t, s, θj) = Z(t, s) ∗ G(θj),    (e1)

where θj is the jth orientation. A Gabor filter is the product of a sinusoid and a two-dimensional Gaussian function; the real and imaginary parts of the complex Gabor filter (GR and GI, respectively) applied to the pixel (x, y) are defined as

GR(x, y) = exp[−(x′² + γ²y′²)/(2σ²)] cos(2πx′/λ)    (e2)

and

GI(x, y) = exp[−(x′² + γ²y′²)/(2σ²)] sin(2πx′/λ),    (e3)

where x′ = x cos(θj) + y sin(θj) and y′ = −x sin(θj) + y cos(θj) are the rotated pixel coordinates, σ is the standard deviation of the Gaussian envelope, γ is its aspect ratio, and λ is the wavelength of the sinusoid. The wavelengths used in WR-STS are λ = 1, 2, 4, 8, 16, 32, and 64, progressively doubling to match the choice of scales. In other words, because the pixel resolution is the same at each scale, the filter kernel must be adjusted to match the corresponding scale. A thorough description and characterization of these filters are provided in Feichtinger and Strohmer (1998).
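A minimal construction of the real part of such a filter is sketched below: a 2D Gaussian envelope modulating an oriented cosine carrier. Tying the envelope width σ to the wavelength and the kernel half-size are illustrative assumptions, not necessarily the parameterization used in WR-STS.

```python
import numpy as np

def gabor_real(wavelength, theta, sigma=None, gamma=1.0, half_size=15):
    """Real part of a Gabor filter: a rotated Gaussian envelope times a
    cosine carrier of the given wavelength, oriented along theta."""
    if sigma is None:
        sigma = 0.5 * wavelength  # illustrative choice, not the paper's
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    x_r = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_r = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_r**2 + (gamma * y_r)**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_r / wavelength)

# A 16-orientation filter bank, matching the orientation count used by WR-STS
bank = [gabor_real(8.0, th) for th in np.linspace(0.0, np.pi, 16, endpoint=False)]
```

Convolving a reflectivity image with each kernel in the bank yields one orientation response map per direction; structures aligned with a kernel's carrier produce the strongest response in that map.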

b. Spatial activation and fusion

Following the scale decomposition and feature extraction, the spatial activation is carried out. As discussed previously, the choice of activation function is key in the performance of WR-STS, since its purpose is to pick only uniquely salient attributes from the feature maps. Because of the uncertainty typically present in weather phenomena, the use of deterministic functions would produce inconsistent results. Statistical methods use covariances as the main tool for activation and are usually very sensitive to contrast (large variances), while information theory activations produce more statistically robust results. The generality of the Shannon information metrics makes this class of functions suitable for weather radar images, since these can evolve into complex configurations that require statistically robust activations.

Shannon’s entropy is defined in terms of probability distributions and has many properties that agree with the intuitive notion of what a measure of information should be. In fact, the entropy is considered to be the self-information of a signal. It can be shown that uniform probability functions have maximum entropy, since the outcome of a realization of the experiment is equiprobable for all the possible values of that random variable. On the other hand, narrower probability density functions are associated with less information, since the outcome of the corresponding experiment is more predictable.
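The contrast between these two cases can be checked numerically. The short sketch below computes Shannon entropy (in bits) for a uniform and a sharply peaked four-outcome distribution; the function name and example distributions are illustrative.

```python
import numpy as np

def shannon_entropy(p):
    """Entropy in bits of a discrete distribution; zero-probability
    outcomes contribute nothing and are dropped."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

uniform = [0.25, 0.25, 0.25, 0.25]  # equiprobable: maximum entropy (2 bits)
peaked = [0.97, 0.01, 0.01, 0.01]   # predictable: much lower entropy
```

For the uniform case the entropy is log2(4) = 2 bits, while the peaked distribution yields roughly 0.24 bits, matching the intuition that predictable outcomes carry little information.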

The concept of entropy in a 2D image is exemplified in Fig. 4 using a radar reflectivity image. The region with similar reflectivity values produces a relatively narrow probability density function (top-left box) and therefore can be thought of as having a more predictable spatial structure. On the other hand, in a region with large reflectivity variation, the probability density function (pdf) approximates a uniform distribution (bottom-left box), implying there is more uncertainty (or more information) in that region. For WR-STS, feature activation is carried out by computing the entropy of all the feature maps. That is, at each pixel of a feature map, the pdf is approximated with a 50-bin, unit-summation normalized histogram of the values inside a disk-shaped window (same as the one used for the contrast feature described previously). The entropy is computed as (Cover and Thomas 2006)
H = −Σk p(k) log2 p(k),    (e4)

where p(k) is the normalized histogram of the feature values inside the window.
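The windowed entropy activation described above can be sketched as follows. A square window and SciPy's generic_filter stand in for the disk-shaped window, and the toy feature field is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import generic_filter

def window_entropy(values):
    """Entropy (bits) of one window, with the pdf approximated by a
    50-bin, unit-summation normalized histogram, as in the text."""
    hist, _ = np.histogram(values, bins=50, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_activation(feature_map, size=9):
    """Replace each pixel by the entropy of its neighborhood (square
    window here; the paper uses a disk with the contrast-feature size)."""
    return generic_filter(feature_map, window_entropy, size=size, mode='nearest')

# Homogeneous region (predictable, low entropy) next to a highly
# variable one (unpredictable, high entropy)
feat = np.full((32, 32), 0.4)
feat[:, 16:] = np.random.default_rng(2).uniform(0.0, 1.0, size=(32, 16))
activated = entropy_activation(feat)
```

The activated map is zero over the uniform region and large over the variable one, reproducing the behavior illustrated in Fig. 4.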
Fig. 4.

Illustration of windowed computation of entropy for a weather radar reflectivity image (or intensity feature I). The entropy is a functional of the distribution and therefore depends only on the estimated probability density function, p(I).


The activation step produces one single-scale, single-feature saliency map for each feature map: SI(t, s) for intensity, SC(t, s) for contrast, and SO(t, s, θj) for orientation. These are fused across all scales to obtain single-feature saliency maps: SI(t) for intensity, SC(t) for contrast, and SO(t) for orientation (step 2 in Fig. 3). To fuse the scales, first we linearly interpolate the images to the grid with the highest resolution. Then, for a given feature map, the pixel with the maximum value is taken across all scales, and the result is the single-feature spatial saliency map. Taking the maximum ensures that single-scale salient regions are captured when the multiscale images are aggregated. Single-orientation maps are also activated through this procedure and are then combined across orientations by averaging, since no particular orientation is favored. Finally, the spatial saliency map SS(t) is obtained by a weighted average of the three single-feature spatial saliency maps as

SS(t) = wI SI(t) + wC SC(t) + wO SO(t).    (e5)

The single-feature-map weights were fine-tuned in an ad hoc way with the assistance of an NWS forecaster. The forecaster was presented with three different weather events (a snowstorm, a squall line, and a tornadic supercell) and reviewed the reflectivity images from three consecutive scans for each event. Then, with a reasonable justification, the forecaster manually selected the most meteorologically important regions. Based on a qualitative interpretation of the relevance of each feature, we adjusted the weights until the WR-STS maps roughly agreed with the forecaster’s assessment. Although this simplistic approach may be subject to interpretation bias, we consider it sufficient for this initial proof-of-concept implementation. Through a more systematic and extensive process involving several forecasters and other weather events, these weights could be further refined. We leave this for future work.
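Assuming all single-scale maps have already been interpolated to the finest grid, the two fusion rules just described (per-pixel maximum across scales, then a weighted average across features) can be sketched as below. The feature weights shown are placeholders, not the tuned values from the paper.

```python
import numpy as np

def fuse_scales(maps):
    """Across-scale fusion: per-pixel maximum, so a region that is salient
    at any single scale survives aggregation."""
    return np.maximum.reduce(list(maps))

def fuse_features(s_int, s_con, s_ori, weights=(0.4, 0.3, 0.3)):
    """Weighted average of the single-feature saliency maps. The weights
    here are illustrative placeholders, not the paper's tuned values."""
    w_i, w_c, w_o = weights
    return w_i * s_int + w_c * s_con + w_o * s_ori

rng = np.random.default_rng(4)
maps = [rng.uniform(0.0, 1.0, size=(16, 16)) for _ in range(7)]  # seven scales
s_feature = fuse_scales(maps)
```

Because the weights sum to one, the fused map stays in the same [0, 1] range as its inputs, which keeps the later spatiotemporal fusion well scaled.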

c. Temporal activation

After obtaining the spatial saliency map, the next step is the temporal activation (step 3 in Fig. 3). In this work, mutual information (MI) is proposed as a means to characterize the temporal evolution of weather echoes in radar images. MI extends the notion of entropy and measures the information one random variable contains about another. The mutual information of the current and previous spatial saliency maps SS(t) and SS(t − T) is computed as (Cover and Thomas 2006)

MI[SS(t); SS(t − T)] = Σx Σy p(x, y) log2 {p(x, y) / [p(x)p(y)]},    (e6)

where p(x) and p(y) are the marginal pdfs, and p(x, y) is the joint pdf. As before, the pdfs are approximated by normalized histograms of values in running windows positioned over corresponding pixels in the two saliency maps (i.e., the windows enclose the same geographical area in both maps). The expression MI(X; Y) is the reduction in the uncertainty of X due to the knowledge of Y. Note that if MI(X; Y) = 0, then there is no common information between the saliency maps at different times; that is, each saliency map conveys unique information. To increase the robustness of the model, temporal saliency is computed as

ST(t) = 1 − (1/2){MI[SS(t); SS(t − T)] + MI[SS(t); SS(t − 2T)]},    (e7)

where the second MI term on the right-hand side of this equation is the MI of the current spatial saliency map and the one from two scans prior. As intended, this expression produces high values of temporal saliency for low values of MI (and vice versa) because the MI is low in regions with high temporal variations.
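A compact way to estimate MI from data is via a joint histogram over corresponding pixels. The sketch below computes MI over whole maps for simplicity, whereas WR-STS estimates it in running windows; the 50-bin choice mirrors the entropy computation, and the variable names are illustrative.

```python
import numpy as np

def mutual_information(x, y, n_bins=50):
    """MI (bits) between two maps with values in [0, 1], estimated from a
    joint histogram over corresponding pixels."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=n_bins,
                                 range=[[0.0, 1.0], [0.0, 1.0]])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of y
    nz = p_xy > 0                           # skip empty cells (0 log 0 = 0)
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

rng = np.random.default_rng(5)
s_now = rng.uniform(0.0, 1.0, size=(64, 64))
s_prev = rng.uniform(0.0, 1.0, size=(64, 64))  # statistically independent map
```

MI is large when the two maps share structure (a map compared with itself recovers its own entropy) and near zero for independent maps, which is why low MI flags regions that changed between scans.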

d. Spatiotemporal fusion

Although the temporal saliency map uses the spatial saliency maps obtained from the weighted single-feature spatial saliency maps, MI is a functional of the pdfs and is thus independent of the weights. Therefore, as a means to fine-tune the performance of WR-STS by allowing independent weighting of the single-feature saliency maps, the spatiotemporal saliency map is computed as a weighted average of SI(t), SC(t), SO(t), and ST(t) as (step 4 in Fig. 3)

SWR-STS(t) = wI′ SI(t) + wC′ SC(t) + wO′ SO(t) + wT ST(t).    (e8)

As mentioned before, these weights were experimentally selected to achieve good agreement between regions highlighted by WR-STS and those of meteorological importance defined by an NWS forecaster. The map obtained in this last step is the WR-STS map at time t, which is designed to highlight regions with high spatial and temporal changes. The WR-STS model parameters discussed in this section are summarized in Table 1.
Table 1. Summary of the model parameters used in WR-STS.

4. Application of WR-STS to a tornado outbreak in central Oklahoma

As reported by the NWS Weather Forecast Office, “a tornado outbreak occurred over parts of northern and central Oklahoma during the day on 24 May 2011, with violent tornadoes devastating several communities. By the end of the day, one EF-5, two EF-4, and two EF-3 tornadoes destroyed buildings, ripped up trees and power poles, and unfortunately, resulted in 11 deaths and 293 injuries” (NWS 2011). This weather event produced a total of 12 tornadoes in Oklahoma. In particular, an EF-5 tornado that developed and touched down in Canadian County became the strongest of them, traveling north of El Reno, west of Piedmont, and across the north edge of Guthrie. It was on the ground for almost 2 h (2050–2235 UTC) with a total pathlength of 101 km and a maximum width of 1.6 km. Nine people died as a result of this violent tornado and over 180 were injured. The damage paths of the most devastating tornadoes are shown in Fig. 5 (tracks of the less damaging tornadoes were not surveyed and are not displayed in the figure). The longest track is the one that corresponds to the Canadian–Kingfisher–Logan EF-5 (CKL EF-5) tornado. The two damage paths to the south of the CKL EF-5 tornado path correspond to the Grady–McClain–Cleveland EF-4 (GMC EF-4) and the Grady–McClain EF-4 (GM EF-4) tornadoes. These caused the death of one person and injured over 100 others.

Fig. 5.

Damage path left by 6 of the 12 tornadoes (the strongest ones) on 24 May 2011. County names mentioned in the text are shown for reference.

Citation: Journal of Atmospheric and Oceanic Technology 34, 1; 10.1175/JTECH-D-16-0092.1

By their nature, these damaging severe storms are intense (strong reflectivity cores) and can evolve quickly in time. It is expected that these types of convective storms exhibit more features than nonconvective storms and that WR-STS will likely highlight structured, quickly evolving regions. To confirm this, WR-STS is computed for the 0.5°-elevation reflectivity images, and severe weather warning polygons issued throughout the event are superimposed to assess their correlation. This makes sense because warning polygons indicate regions (of interest) where severe weather is present or is likely to strike in the near future. However, if the lead time of a warning polygon is less than two update times, then WR-STS and warning polygons may not be completely independent. Still, even when WR-STS and warning polygons are based on common reflectivity data, WR-STS uses only three consecutive reflectivity images, whereas the determination of warning polygons involves multiple data sources (e.g., radar, satellite, weather stations) and, more importantly, the constantly evolving conceptual model of the forecaster at the time of the warning (Waters 2007). Thus, good correlation between warning polygons and WR-STS can be used to infer the effectiveness with which WR-STS captures regions with meteorologically important echoes. Figure 6 shows reflectivity images spaced about 30 min apart (although WR-STS is computed on scans spaced approximately 5 min apart). The yellow superimposed polygons are severe thunderstorm warnings that were active at the time of the scan, and the red polygons are tornado warnings that were also active at the time. As a visual reference, white contour lines delimiting regions with WR-STS values of 0.5 or higher are plotted on top of the reflectivity images. The corresponding WR-STS maps are presented in Fig. 7, with white 30-dBZ reflectivity contour lines plotted on top.
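The correlation check described above amounts to asking what fraction of high-saliency grid cells fall inside a warning polygon. A minimal sketch of that test follows; the function names, the example grid, and the use of a simple ray-casting point-in-polygon test are illustrative assumptions (the 0.5 threshold follows the text).

```python
import numpy as np

def point_in_polygon(x, y, poly):
    """Ray-casting test: is the point (x, y) inside the polygon,
    given as a list of (x, y) vertices in order?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def fraction_salient_inside(sts_map, xs, ys, polygon, thresh=0.5):
    """Fraction of grid cells with WR-STS >= thresh that fall inside
    a warning polygon (all coordinates in the same planar units)."""
    salient = [(x, y) for x, y, v in
               zip(np.ravel(xs), np.ravel(ys), np.ravel(sts_map))
               if v >= thresh]
    if not salient:
        return 0.0
    hits = sum(point_in_polygon(x, y, polygon) for x, y in salient)
    return hits / len(salient)
```

In practice radar data live on a polar grid and warnings are latitude–longitude polygons, so a map projection step would precede this test; the sketch only shows the counting logic.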

Fig. 6.

Radar reflectivity scans of the tornadic storm on 24 May 2011 obtained from the KTLX radar. Active tornado warning polygons (red), severe thunderstorm polygons (yellow), and 0.5 or higher WR-STS contour lines (white) are overlaid on top of the reflectivity data: (a) 1932:07, (b) 1957:54, (c) 2027:43, (d) 2057:33, (e) 2127:20, (f) 2157:04, (g) 2226:48, and (h) 2256:33 UTC.


Fig. 7.

Corresponding WR-STS maps for the reflectivity images in Fig. 6. Active tornado warning polygons (red), severe thunderstorm polygons (yellow), and 30-dBZ reflectivity contour lines (white) are overlaid on top of the WR-STS maps: (a) 1932:07, (b) 1957:54, (c) 2027:43, (d) 2057:33, (e) 2127:20, (f) 2157:04, (g) 2226:48, and (h) 2256:33 UTC.


At 1932:07 UTC (Figs. 6a and 7a), many convective storm cells had developed in western Oklahoma, at about 160 km from the KTLX radar (Oklahoma City, near Twin Lakes). Two severe thunderstorm warnings were active at the time, although no tornado warnings had been issued at that point. One storm cell (the farthest one to the northwest) was not contained in a severe thunderstorm warning polygon at that time. Looking at the WR-STS map, it can be seen that both severe thunderstorm polygons correspond to medium-to-high WR-STS values, while the storm cell outside of the polygons corresponds to low WR-STS values. Notice that since ground clutter is stationary (i.e., it has no temporal feature), its WR-STS is usually low (Schvartzman 2015). Later, at 1957:54 UTC, the storms continued to develop and two more severe thunderstorm warnings were issued (Figs. 6b and 7b) while the first two warnings remained active. Notice that even though the storms to the north have intense reflectivity cores, their WR-STS is relatively low compared to that of the southern storms, since the latter exhibit stronger temporal features.

At 2027:43 UTC (Figs. 6c and 7c), two of the most recent severe thunderstorm warnings were still active, and a third one was issued as an update for the farthest storm located to the northwest. More importantly, three tornado warnings had been issued and were active at the time of this scan. The storms for which these warnings were issued exhibit strong rotation features. WR-STS assigns very high values to the storm regions within the tornado warning polygons, with the southernmost storm reaching the maximum value of 1. Notice that the WR-STS values within severe thunderstorm warnings are considerably lower than those within tornado warnings, consistent with the notion that tornadic storms produce more structured and quickly evolving features.

Minutes before the scan shown in Fig. 6d, at approximately 2050 UTC, a tornado formed and became the CKL EF-5, and warnings were issued in west-central Oklahoma. Figures 6d and 7d show that the previous three tornado warnings were still active and a fourth one was issued. WR-STS values are considerably higher in the last tornado warning polygon, issued in Canadian County, compared to the other three active warnings. In addition, a severe thunderstorm warning was issued for a storm cell to the southwest (outside of Oklahoma), which is not deemed informative by WR-STS. The reason is that the storm was very far from the radar and therefore its features are not apparent.

At 2127:20 UTC (Figs. 6e and 7e), a new tornado warning was issued for the CKL EF-5 tornado that was crossing Canadian County, and several new severe thunderstorm warnings were also issued. Once again, the WR-STS values assigned to weather echoes inside of the tornado warning polygons are noticeably higher than the values for severe thunderstorm warning polygons (which are also relatively high), even though most of the storm cells have comparable reflectivity values. At that moment, the CKL EF-5 tornado was still violent, and another funnel cloud started to form about 65 km to the south. Specifically, the storm that later became the GMC EF-4 tornado (west-central) was strengthening over the border between Caddo and Grady Counties, where WR-STS takes high values (around 0.75). Notice that the northernmost tornado warning polygon does not correspond with high WR-STS values, since this polygon was temporally obsolete at the time of the scan (it was issued over 30 min earlier; see Fig. 7d).

A tornado warning was issued at 2150 UTC for the GMC EF-4 tornado, as can be seen in Figs. 6f and 7f. At 2157:04 UTC, as the CKL EF-5 tornado was weakening over Logan County, WR-STS values also start decreasing, while they increase considerably in the region of the GMC EF-4 tornado, which formed a little later, at approximately 2206 UTC. The analysis for Figs. 6g and 7g can be carried out in a similar way. Finally, at 2256:33 UTC (Figs. 6h and 7h), the CKL EF-5 tornado had dissipated completely and the GMC EF-4 tornado was starting to decay. WR-STS indicates a highly informative region over McClain and Cleveland Counties as the tornado crosses over. Because of the distance of the southern storm cells from the radar, many fine features are lost, and WR-STS computes only relatively low values in that region. This is perhaps one of the main limitations of WR-STS, reflecting the inherent limitation of weather radars to resolve small-scale features as storms get farther away.

5. Performance analysis and validation of WR-STS

With the goal of evaluating the statistical performance of WR-STS, we propose the use of archived severe thunderstorm and tornado warning polygons issued by NWS forecasters. These polygons were determined by forecasters for regions in which hazardous weather would likely take place in the near future. Nevertheless, we stress that WR-STS is not designed to anticipate the occurrence of severe weather but to assist in highlighting regions where potentially hazardous weather is currently evolving.

High-reflectivity regions can be intuitively associated with regions of meteorological importance, since severe weather is often characterized by strong reflectivity cores. In fact, WR-STS uses reflectivity (intensity) as one of its spatial features. However, even though the intensity feature is an important spatial component in WR-STS, the other spatial and temporal components add significant value to its skill in highlighting important regions. In other words, regions of interest determined by WR-STS are not just high-reflectivity regions. To show this, the performance analysis is carried out on both WR-STS and radar reflectivities.

For the case study presented in section 4, radar reflectivity and WR-STS values inside and outside the areas determined by severe thunderstorm and tornado warning polygons are recorded for every scan throughout the time series, and normalized histograms (to an area of 1) are computed for both reflectivity and WR-STS (Fig. 8). An examination of these histograms reveals that it may be easier to distinguish regions with meteorologically important echoes using WR-STS, since the normalized histograms of values inside and outside warning polygons are more separated. To quantify the statistical distance between these normalized histograms, we use the total variation distance. While this measure is related to the relative entropy (also known as the Kullback–Leibler divergence), it has the important property of being a true distance metric on the space of probability distributions. It is defined as (Sriperumbudur et al. 2009)

δ(I, O) = (1/2) Σ_{x∈X} |p_I(x) − p_O(x)|,    (9)

where I and O are random variables over the same domain X, corresponding to the values inside and outside the warning polygons, respectively (referred to as “in” and “out”). These random variables have probability mass functions p_I(x) and p_O(x), where x ∈ X. Notice that the term inside the summation on the right-hand side represents the absolute difference between the probabilities of each random variable taking the same outcome, and δ(I, O) is bounded between 0 and 1. For instance, if I and O are statistically very similar, their distributions almost completely overlap and δ(I, O) approaches 0. On the other hand, if I and O are statistically very different, there is little to no overlap between their distributions, and δ(I, O) approaches 1. In the context of classification, the larger the total variation distance between classes, the better the separation between them. Thus, by comparing the total variation distances of WR-STS and reflectivity inside and outside the polygons, we can assess their relative skill at highlighting meteorologically important regions. To this end, we determine the total variation distance using normalized histograms of reflectivity and WR-STS.
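The total variation distance between two binned distributions is a one-line computation once the histograms share the same bins; a minimal numpy sketch (the function name is an illustrative assumption):

```python
import numpy as np

def total_variation_distance(p, q):
    """Total variation distance between two probability mass functions
    defined over the same bins: 0.5 * sum |p(x) - q(x)|.
    Inputs are renormalized to unit mass, so raw histogram counts work."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * float(np.abs(p - q).sum())
```

The 1/2 factor makes the result equal to the fraction of probability mass not shared by the two distributions, which is why the paper can read it directly as a percentage of non-overlapping histogram area.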
Fig. 8.

Selective ability of WR-STS: (a) normalized histograms (unit area) of reflectivity inside and outside the warning polygons. (b) As in (a), but for WR-STS; (c) ROC curves of reflectivity and WR-STS; and (d) time-composite WR-STS.


We compute normalized (unit area) histograms of reflectivity and WR-STS to approximate their probability mass functions inside and outside the warning polygons. Using the normalized histograms of reflectivity inside and outside the polygons, we compute a total variation distance of 0.407. This means that 40.7% of the area below the curves in Fig. 8a is not shared, while 59.3% is. Similarly, using the normalized histograms of WR-STS, we compute a total variation distance of 0.709. This indicates that 70.9% of the area below the curves in Fig. 8b is not shared, while 29.1% is. There is a substantial difference between the total variation distances of reflectivity and WR-STS, which confirms that the additional features incorporated into WR-STS make it more skilled than reflectivity alone. These normalized histograms are also used to compute receiver operating characteristic (ROC) curves for reflectivity and WR-STS, shown in Fig. 8c, which allow a direct comparison of the relationship between the true and false positives of each technique. As the results show, WR-STS always attains a higher rate of true positives at a lower rate of false positives. For example, for a true positive rate of 90%, WR-STS has a false positive rate of approximately 20%, whereas reflectivity has a false positive rate of 65%. In addition to the normalized histograms and the ROC curves, a time-composite WR-STS map with superimposed warning polygons is shown in Fig. 8d. The map is computed by taking the maximum WR-STS throughout the time series for each resolution volume. Because WR-STS was designed to highlight regions with salient intensity, contrast, orientation, and temporal features, and tornadoes produce these kinds of features in radar reflectivity images, the highlighted path shown in Fig. 8d is, as expected, well correlated with the tornado tracks.
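Both the ROC construction from the in/out histograms and the time-composite map reduce to a few array operations. The sketch below shows one way to do it, assuming "positive" means the value falls at or above a threshold bin; the function names are illustrative assumptions.

```python
import numpy as np

def roc_from_histograms(p_in, p_out):
    """True- and false-positive rates obtained by sweeping a threshold
    over the shared histogram bins, declaring 'in' when a value falls in
    the threshold bin or higher. p_in and p_out are unit-area histograms."""
    p_in = np.asarray(p_in, dtype=float)
    p_out = np.asarray(p_out, dtype=float)
    p_in = p_in / p_in.sum()
    p_out = p_out / p_out.sum()
    # P(value >= bin k) via reversed cumulative sums
    tpr = np.cumsum(p_in[::-1])[::-1]
    fpr = np.cumsum(p_out[::-1])[::-1]
    return tpr, fpr

def time_composite(maps):
    """Per-pixel maximum of WR-STS over a time series of maps
    (as used for Fig. 8d)."""
    return np.maximum.reduce([np.asarray(m) for m in maps])
```

With well-separated in/out histograms, the swept threshold yields high true-positive rates at low false-positive rates, which is exactly the behavior reported for WR-STS in Fig. 8c.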

It can be inferred from this case study that WR-STS could be used to automatically highlight regions of interest that contain meteorologically important echoes. Whereas the results presented above correspond just to a single severe weather event, they motivate the study of a larger number of cases. Significant severe weather events that occurred in the last few years in the Midwest United States were chosen for this study, where cases after 1 October 2007 were selected because storm-based polygon warnings became operational after that date.

Table 2 presents the basic characteristics of the selected severe weather events. For each event, a time series of reflectivity data from the lowest elevation scan (0.5°) was obtained from the National Climatic Data Center (now known as NCEI), and the corresponding warning polygons were obtained from the NWS severe weather archives. The second column in Table 2 indicates the name of the WSR-88D radar site. Dates and periods of the observations are specified in the following two columns. The last column specifies the overall type of severe weather event. It should be noted that the spatial and temporal characteristics of these severe storms are very different. More specifically, there are four isolated supercell cases (1–4), four squall-line cases with embedded storm cells (5–8), and two tornado outbreak cases (9 and 10).

Table 2.

WSR-88D reflectivity data from severe weather events used to evaluate the performance of WR-STS. Radars are as follows: KTLX near Twin Lakes, OK; KFWS near Fort Worth, TX; KHTX near Huntsville, AL; KOAX near Omaha, NE; KLSX near St. Louis, MO; KILX near Lincoln, IL; KLOT near Chicago, IL; and KEAX near Kansas City, MO.


Table 3 provides detailed information about the warnings issued throughout each period, as well as the performance obtained for reflectivity and WR-STS. Polygon count represents the number of thunderstorm warning polygons issued in the period, which were used to compute the performance metrics. Warning avg size (km2) and warning avg duration (min) express the average area and the average time span of the warning polygons, respectively. Columns five and six provide an intuitive idea of the intensity of these storms through the average maximum reflectivity (dBZ) and the average reflectivity core size (km2), respectively, of the storms encompassed by the warning polygons. Both are taken at an elevation of 0.5°, where a reflectivity threshold of 35 dBZ is used to define the cores (Johnson et al. 1998). Notice that the tornado outbreaks have the largest number of warnings issued. Moreover, they also have the largest average maximum reflectivity and average reflectivity core sizes. The number of warnings issued for squall lines and isolated supercells is variable, but the squall-line cases tend to have larger reflectivity cores and lower average reflectivity values than the isolated supercells due to their large spatial extent. Complex severe weather systems like these are of particular interest in this work since they help reveal the skill of WR-STS.
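The reflectivity core size metric described above can be sketched as a simple thresholded area count on a gridded reflectivity field. The 35-dBZ threshold follows Johnson et al. (1998); the function name and the uniform-cell-area simplification are illustrative assumptions (real radar resolution cells grow with range).

```python
import numpy as np

def core_size_km2(refl_dbz, cell_area_km2, threshold_dbz=35.0):
    """Approximate reflectivity core size: total area of grid cells at or
    above the threshold (35 dBZ following Johnson et al. 1998), assuming a
    uniform cell area for simplicity."""
    refl = np.asarray(refl_dbz, dtype=float)
    return float(np.count_nonzero(refl >= threshold_dbz) * cell_area_km2)
```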

Table 3.

Warning polygon characteristics and performance metrics for the 10 severe weather cases in Table 2. Polygon count is the number of warning polygons active through the time period, avg maximum reflectivity (dBZ) is the average of the maximum reflectivities recorded through the time period, and avg core size (km2) is for reflectivities above 35 dBZ in the 0.5° cut only. The last two columns are the total variation distances of the normalized histograms for reflectivity and WR-STS, respectively.


The total variation distances of the normalized histograms are given in columns seven and eight of Table 3. The results show that the total variation distance of WR-STS exceeds that of reflectivity for all the cases. Notice that the difference is always larger than 0.2, which shows that the in and out WR-STS histograms are separated by at least 20% more than those corresponding to reflectivity. The mean total variation distance for reflectivity is 39.2%, while the mean for WR-STS is 71.2%. This strongly corroborates the case-study finding that WR-STS is particularly skilled at highlighting regions of meteorological importance characterized by warning polygons. In addition, notice that the lowest total variation distance for WR-STS is 0.5695 (case 5), which corresponds to a tornado outbreak case. This relatively low performance is due to the higher complexity of the storms in this case and the large number of warnings issued in the time period.

As mentioned before, warning polygons are regions determined by NWS forecasters in which severe weather is likely to strike. In this paper, warning polygons were used as ground truth to analyze the performance of WR-STS. However, the use of warning polygons as a validation method has some limitations. First, since the polygons predict the occurrence of severe weather, they do not always coincide with the region where severe weather actually takes place; in other words, in a few complex cases, the polygons may be incorrectly placed. Second, because severe weather may develop quickly and because the temporal resolution of the precipitation VCPs is about 5 min, forecasters may miss an event, in which case no polygon is issued in time. According to Barnes et al. (2007), the national false alarm ratio (FAR) for tornado warnings in 2003 was 0.76, meaning that only about one out of every four tornado warnings was verified.
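The FAR arithmetic above is simple but worth making explicit, since FAR is defined on warnings, not on tornadoes. A minimal sketch (function names are illustrative assumptions):

```python
def false_alarm_ratio(unverified_warnings, total_warnings):
    """FAR = warnings that did not verify / total warnings issued."""
    return unverified_warnings / total_warnings

def verified_fraction(far):
    """Fraction of warnings that verified, given the FAR."""
    return 1.0 - far
```

So a FAR of 0.76 means 76 of every 100 warnings did not verify, leaving roughly one in four that did.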

Despite these limitations, it can be inferred from this analysis that high WR-STS values are in good agreement with regions of meteorological importance as defined by NWS forecasters. WR-STS does not have the ability to anticipate the occurrence of severe weather (as forecasters do), but it could assist them in complex weather scenarios. A computer-assisted human decision-making process could reduce the time spent manually looking at radar data from each elevation angle and increase the time available for examining the most important storm regions. In turn, having more time to examine those storms could provide more confidence and aid in the warning decision process.

6. Conclusions

This paper explored the first application of a bioinspired model of attention (referred to as saliency) to weather radar reflectivity images. Saliency models have been used in many other fields to model the human attention system and to better allocate limited resources to relevant information. The proposed model—Weather Radar Spatiotemporal Saliency (WR-STS)—accounts for spatial and temporal features present in radar reflectivity images. The spatial features include the intensity, contrast, and orientation at a number of different spatial scales. In addition to indicating regions of high information content, these spatial features tend to highlight regions of high meteorological relevance. Temporal features are obtained by processing time series of spatial-saliency maps using mutual information. In general, we postulate that WR-STS could aid forecasters in focusing their attention by spending more time analyzing regions of meteorological importance, herein defined as weather echoes confined within severe thunderstorm and tornado warning polygons issued by NWS forecasters. It should be noted that, depending on the application, regions of meteorological importance could be defined in different ways (e.g., snowstorms or drylines); we leave this extension of WR-STS for future work.

It was shown through a case study that convective storms, which display distinctive spatial and temporal features, lead to higher WR-STS values. The analysis of a convective severe weather environment in which multiple tornado-producing thunderstorms were present revealed consistency between high WR-STS values and the severe thunderstorm and tornado warning polygons issued by NWS forecasters. It was shown that WR-STS generally assigns medium-to-high values to regions in which severe thunderstorm warnings are active. These results were corroborated by analyzing 10 diverse cases using the same methodology: based on a statistical-distance metric, WR-STS was shown to be significantly better than reflectivity at highlighting regions of meteorological importance.

Whereas WR-STS appears to be a promising tool for analyzing weather radar images, it is still in its infancy, and this initial proof-of-concept implementation has a few limitations. First, image artifacts, such as beam blockage and anomalous propagation clutter, can cause WR-STS to highlight regions that are not necessarily of meteorological importance. Second, a relatively high temporal resolution and good azimuthal sampling are needed by this model to better focus on regions of interest. Perhaps the most important limitation of WR-STS is its diminishing capability with increasing range: as the range from the radar increases, the radar resolution volumes become larger and many spatial features of severe storms are inherently obscured. This results in a significant reduction of WR-STS values and could lead to missing meteorologically important regions that are far from the radar.
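The range limitation noted above follows directly from beam geometry: the cross-range extent of a resolution volume grows linearly with range. A quick sketch of that scaling (the 1° beamwidth is a nominal value used here for illustration):

```python
import math

def cross_range_resolution_km(range_km, beamwidth_deg=1.0):
    """Azimuthal (cross-range) extent of a radar resolution volume,
    approximated as range times the beamwidth in radians. The 1-degree
    default is a nominal value for illustration."""
    return range_km * math.radians(beamwidth_deg)
```

At 60 km the cross-range extent is about 1 km, but at 180 km it is over 3 km, so fine-scale spatial features that drive the contrast and orientation channels are smeared out at long range.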

Even though preliminary results are promising, there are a number of research questions that need to be explored before WR-STS could be considered for operational use. First, the proof-of-concept WR-STS uses images from only the lowest elevation scan. Using volumetric data could provide relevant information about higher levels of the atmosphere and could aid WR-STS in more accurately narrowing down regions where severe weather phenomena are developing. For example, mesocyclones usually develop at midlevels (Stumpf et al. 1998), and their initiation would generally not be seen at the lower elevations in radar images. Volumetric saliency is an active research field still in its early stages (Shen et al. 2016). Second, the model presented in this work uses only radar reflectivity images. This can largely limit the extent to which WR-STS is able to identify features in the radar data, since several important salient features may be present in other radar fields (e.g., mesocyclone signatures in radial velocity images). Incorporating the Doppler moments (radial velocity and spectrum width) and the polarimetric variables could provide the model with more information, making WR-STS more robust. Last, a study involving forecasters using eye-tracking tools (Bowden et al. 2016) could be used to validate the performance of WR-STS more systematically and accurately.

Although the number of cases analyzed is limited, we can conclude that WR-STS has the potential to consistently highlight regions of meteorological interest. In particular, WR-STS could aid in discerning meteorologically important weather echoes during complex severe weather situations, assisting human forecasters in the warning decision process by reducing the time spent examining all available radar data, and increasing the time available for analyzing the most important storm regions.

Acknowledgments

The authors thank Donald Burgess and Katie Bowden for providing comments that improved the manuscript. Funding was provided by NOAA/Office of Oceanic and Atmospheric Research under NOAA–University of Oklahoma Cooperative Agreement NA11OAR4320072, U.S. Department of Commerce.

REFERENCES

  • Andra, D. L., E. M. Quoetone, and W. F. Bunting, 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559566, doi:10.1175/1520-0434(2002)017<0559:WDMTRR>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Barnes, L. R., E. C. Gruntfest, M. H. Hayden, D. M. Schultz, and C. Benight, 2007: False alarms and close calls: A conceptual model of warning accuracy. Wea. Forecasting, 22, 11401147, doi:10.1175/WAF1031.1.

    • Search Google Scholar
    • Export Citation
  • Bowden, K. A., P. L. Heinselman, D. M. Kingfield, and R. P. Thomas, 2015: Impacts of phased-array radar data on forecaster performance during severe hail and wind events. Wea. Forecasting, 30, 389404, doi:10.1175/WAF-D-14-00101.1.

    • Search Google Scholar
    • Export Citation
  • Bowden, K. A., P. L. Heinselman, and Z. Kang, 2016: Exploring applications of eye-tracking in operational meteorology research. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-15-00148.1, in press.

    • Search Google Scholar
    • Export Citation
  • Brooks, H. E., 2004: On the relationship of tornado path length and width to intensity. Wea. Forecasting, 19, 310319, doi:10.1175/1520-0434(2004)019<0310:OTROTP>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Bruce, N., and J. Tsotsos, 2007: Attention based on information maximization. J. Vision, 7, 950, doi:10.1167/7.9.950.

  • Cover, T. M., and J. A. Thomas, 2006: Elements of Information Theory. 2nd ed. John Wiley & Sons, 792 pp.

  • Daugman, J. G., 1985: Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Amer., 2A, 11601169, doi:10.1364/JOSAA.2.001160.

    • Search Google Scholar
    • Export Citation
  • Erdem, E., and A. Erdem, 2013: Visual saliency estimation by nonlinearly integrating features using region covariances. J. Vision, 13, 11, doi:10.1167/13.4.11.

    • Search Google Scholar
    • Export Citation
  • Feichtinger, H. G., and T. Strohmer, Eds., 1998: Gabor Analysis and Algorithms: Theory and Applications. Applied and Numerical Harmonic Analysis, Birkhäuser Basel, 496 pp., doi:10.1007/978-1-4612-2016-9.

  • Freeman, W. T., and E. H. Adelson, 1991: The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell., 13, 891906, doi:10.1109/34.93808.

    • Search Google Scholar
    • Export Citation
  • Frintrop, S., E. Rome, and H. I. Christensen, 2010: Computational visual attention systems and their cognitive foundations: A survey. ACM Trans. Appl. Percept., 7, 6, doi:10.1145/1658349.1658355.

    • Search Google Scholar
    • Export Citation
  • Harel, J., C. Koch, and P. Perona, 2007: Graph-based visual saliency. Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference, B. Schölkopf, J. Platt, and T. Hofmann, Eds., Neural Information Processing Series, MIT Press, 545–552.

  • Heinselman, P. L., D. L. Priegnitz, K. L. Manross, T. M. Smith, and R. W. Adams, 2008: Rapid sampling of severe storms by the National Weather Radar Testbed Phased Array Radar. Wea. Forecasting, 23, 808824, doi:10.1175/2008WAF2007071.1.

    • Search Google Scholar
    • Export Citation
  • Heinselman, P. L., D. S. LaDue, and H. Lazrus, 2012: Exploring impacts of rapid-scan radar data on NWS warning decisions. Wea. Forecasting, 27 , 10311044, doi:10.1175/WAF-D-11-00145.1.

    • Search Google Scholar
    • Export Citation
  • Huang, L., and H. Pashler, 2007: A Boolean map theory of visual attention. Psychol. Rev., 114, 599, doi:10.1037/0033-295X.114.3.599.

  • Itti, L., and C. Koch, 2001: Computational modelling of visual attention. Nat. Rev. Neurosci., 2, 194203, doi:10.1038/35058500.

  • Itti, L., C. Koch, and E. Niebur, 1998: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell., 20, 12541259, doi:10.1109/34.730558.

    • Search Google Scholar
    • Export Citation
  • Itti, L., G. Rees, and J. Tsotsos, Eds., 2005: Models of bottom-up attention and saliency. Neurobiology of Attention, Academic Press, 576–582.

  • Johnson, J. T., P. L. MacKeen, A. Witt, E. D. Mitchell, G. J. Stumpf, M. D. Eilts, and K. W. Thomas, 1998: The Storm Cell Identification and Tracking algorithm: An enhanced WSR-88D algorithm. Wea. Forecasting, 13, 263276, doi:10.1175/1520-0434(1998)013<0263:TSCIAT>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Koch, C., 2004: Biophysics of Computation: Information Processing in Single Neurons. Computational Neuroscience Series, Oxford University Press, 588 pp.

  • Koch, C., and S. Ullman, 1987: Shifts in selective visual attention: Towards the underlying neural circuitry. Matters of Intelligence: Conceptual Structures in Cognitive Neuroscience, L. M. Vaina, Ed., Synthese Library, Vol. 188, Springer, 115–141.

  • Li, J., and W. Gao, Eds., 2014: Visual Saliency Computation: A Machine Learning Perspective. Lecture Notes in Computer Science, Vol. 8408, Springer International Publishing, 240 pp., doi:10.1007/978-3-319-05642-5.

  • Li, Z., S. Qin, and L. Itti, 2011: Visual attention guided bit allocation in video compression. Image Vision Comput., 29, 114, doi:10.1016/j.imavis.2010.07.001.

    • Search Google Scholar
    • Export Citation
  • Andra, D. L., E. M. Quoetone, and W. F. Bunting, 2002: Warning decision making: The relative roles of conceptual models, technology, strategy, and forecaster expertise on 3 May 1999. Wea. Forecasting, 17, 559–566, doi:10.1175/1520-0434(2002)017<0559:WDMTRR>2.0.CO;2.

  • Barnes, L. R., E. C. Gruntfest, M. H. Hayden, D. M. Schultz, and C. Benight, 2007: False alarms and close calls: A conceptual model of warning accuracy. Wea. Forecasting, 22, 1140–1147, doi:10.1175/WAF1031.1.

  • Bowden, K. A., P. L. Heinselman, D. M. Kingfield, and R. P. Thomas, 2015: Impacts of phased-array radar data on forecaster performance during severe hail and wind events. Wea. Forecasting, 30, 389–404, doi:10.1175/WAF-D-14-00101.1.

  • Bowden, K. A., P. L. Heinselman, and Z. Kang, 2016: Exploring applications of eye-tracking in operational meteorology research. Bull. Amer. Meteor. Soc., doi:10.1175/BAMS-D-15-00148.1, in press.

  • Brooks, H. E., 2004: On the relationship of tornado path length and width to intensity. Wea. Forecasting, 19, 310–319, doi:10.1175/1520-0434(2004)019<0310:OTROTP>2.0.CO;2.

  • Bruce, N., and J. Tsotsos, 2007: Attention based on information maximization. J. Vision, 7, 950, doi:10.1167/7.9.950.

  • Cover, T. M., and J. A. Thomas, 2006: Elements of Information Theory. 2nd ed. John Wiley & Sons, 792 pp.

  • Daugman, J. G., 1985: Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. J. Opt. Soc. Amer., 2A, 1160–1169, doi:10.1364/JOSAA.2.001160.

  • Erdem, E., and A. Erdem, 2013: Visual saliency estimation by nonlinearly integrating features using region covariances. J. Vision, 13, 11, doi:10.1167/13.4.11.

  • Feichtinger, H. G., and T. Strohmer, Eds., 1998: Gabor Analysis and Algorithms: Theory and Applications. Applied and Numerical Harmonic Analysis, Birkhäuser Basel, 496 pp., doi:10.1007/978-1-4612-2016-9.

  • Freeman, W. T., and E. H. Adelson, 1991: The design and use of steerable filters. IEEE Trans. Pattern Anal. Mach. Intell., 13, 891–906, doi:10.1109/34.93808.

  • Frintrop, S., E. Rome, and H. I. Christensen, 2010: Computational visual attention systems and their cognitive foundations: A survey. ACM Trans. Appl. Percept., 7, 6, doi:10.1145/1658349.1658355.

  • Harel, J., C. Koch, and P. Perona, 2007: Graph-based visual saliency. Advances in Neural Information Processing Systems 19: Proceedings of the 2006 Conference, B. Schölkopf, J. Platt, and T. Hofmann, Eds., Neural Information Processing Series, MIT Press, 545–552.

  • Heinselman, P. L., D. L. Priegnitz, K. L. Manross, T. M. Smith, and R. W. Adams, 2008: Rapid sampling of severe storms by the National Weather Radar Testbed Phased Array Radar. Wea. Forecasting, 23, 808–824, doi:10.1175/2008WAF2007071.1.

  • Heinselman, P. L., D. S. LaDue, and H. Lazrus, 2012: Exploring impacts of rapid-scan radar data on NWS warning decisions. Wea. Forecasting, 27, 1031–1044, doi:10.1175/WAF-D-11-00145.1.

  • Huang, L., and H. Pashler, 2007: A Boolean map theory of visual attention. Psychol. Rev., 114, 599, doi:10.1037/0033-295X.114.3.599.

  • Itti, L., and C. Koch, 2001: Computational modelling of visual attention. Nat. Rev. Neurosci., 2, 194–203, doi:10.1038/35058500.

  • Itti, L., C. Koch, and E. Niebur, 1998: A model of saliency-based visual attention for rapid scene analysis. IEEE Trans. Pattern Anal. Mach. Intell., 20, 1254–1259, doi:10.1109/34.730558.

  • Itti, L., G. Rees, and J. Tsotsos, Eds., 2005: Models of bottom-up attention and saliency. Neurobiology of Attention, Academic Press, 576–582.

  • Johnson, J. T., P. L. MacKeen, A. Witt, E. D. Mitchell, G. J. Stumpf, M. D. Eilts, and K. W. Thomas, 1998: The Storm Cell Identification and Tracking algorithm: An enhanced WSR-88D algorithm. Wea. Forecasting, 13, 263–276, doi:10.1175/1520-0434(1998)013<0263:TSCIAT>2.0.CO;2.

  • Koch, C., 2004: Biophysics of Computation: Information Processing in Single Neurons. Computational Neuroscience Series, Oxford University Press, 588 pp.

  • Koch, C., and S. Ullman, 1987: Shifts in selective visual attention: Towards the underlying neural circuitry. Matters of Intelligence: Conceptual Structures in Cognitive Neuroscience, L. M. Vaina, Ed., Synthese Library, Vol. 188, Springer, 115–141.

  • Li, J., and W. Gao, Eds., 2014: Visual Saliency Computation: A Machine Learning Perspective. Lecture Notes in Computer Science, Vol. 8408, Springer International Publishing, 240 pp., doi:10.1007/978-3-319-05642-5.

  • Li, Z., S. Qin, and L. Itti, 2011: Visual attention guided bit allocation in video compression. Image Vision Comput., 29, 1–14, doi:10.1016/j.imavis.2010.07.001.

  • Maddalena, L., and A. Petrosino, 2008: A self-organizing approach to background subtraction for visual surveillance applications. IEEE Trans. Image Process., 17, 1168–1177, doi:10.1109/TIP.2008.924285.

  • Mallat, S., 2008: A Wavelet Tour of Signal Processing: The Sparse Way. 3rd ed. Academic Press, 832 pp.

  • Mancas, M., D. De Beul, N. Riche, and X. Siebert, 2012: Human attention modelization and data reduction. Video Compression, A. Punchihewa, Ed., InTech, 103–128, doi:10.5772/34942.

  • NWS, 2011: The May 24, 2011 tornado outbreak in Oklahoma. [Available online at https://www.weather.gov/oun/events-20110524.]

  • Raichle, M. E., 2010: The brain’s dark energy. Sci. Amer., 302, 44–49, doi:10.1038/scientificamerican0310-44.

  • Reinoso-Rondinel, R., T.-Y. Yu, and S. Torres, 2010: Multifunction phased-array radar: Time balance scheduler for adaptive weather sensing. J. Atmos. Oceanic Technol., 27, 1854–1867, doi:10.1175/2010JTECHA1420.1.

  • Riche, N., M. Mancas, M. Duvinage, M. Mibulumukini, B. Gosselin, and T. Dutoit, 2013: RARE2012: A multi-scale rarity-based saliency detection with its comparative statistical analysis. Signal Process.: Image Commun., 28, 642–658, doi:10.1016/j.image.2013.03.009.

  • Schvartzman, D., 2015: Weather radar spatio-temporal saliency. M.S. thesis, University of Oklahoma, 152 pp.

  • Shen, E., Y. Wang, and S. Li, 2016: Spatiotemporal volume saliency. J. Visualization, 19, 157, doi:10.1007/s12650-015-0293-y.

  • Shepherd, G., Ed., 2003: The Synaptic Organization of the Brain. 5th ed. Oxford University Press, 736 pp.

  • Simoncelli, E. P., and W. T. Freeman, 1995: The steerable pyramid: A flexible architecture for multi-scale derivative computation. Proceedings: International Conference on Image Processing, Vol. 3, IEEE, 444–447, doi:10.1109/ICIP.1995.537667.

  • Sohlberg, M. M., and C. A. Mateer, 1989: Introduction to Cognitive Rehabilitation: Theory and Practice. Guilford Press, 414 pp.

  • Sriperumbudur, B. K., K. Fukumizu, A. Gretton, B. Schölkopf, and G. R. G. Lanckriet, 2009: On integral probability metrics, ϕ-divergences and binary classification. arXiv:0901.2698, 18 pp. [Available online at https://arxiv.org/pdf/0901.2698v4.pdf.]

  • Stumpf, G. J., A. Witt, E. D. Mitchell, P. L. Spencer, J. Johnson, M. D. Eilts, K. W. Thomas, and D. W. Burgess, 1998: The National Severe Storms Laboratory mesocyclone detection algorithm for the WSR-88D. Wea. Forecasting, 13, 304–326, doi:10.1175/1520-0434(1998)013<0304:TNSSLM>2.0.CO;2.

  • Torres, S. M., and Coauthors, 2016: Adaptive-weather-surveillance and multifunction capabilities of the National Weather Radar Testbed phased array radar. Proc. IEEE, 104, 660–672, doi:10.1109/JPROC.2015.2484288.

  • Tsotsos, J. K., L. Itti, and G. Rees, 2005: A brief and selective history of attention. Neurobiology of Attention, L. Itti, G. Rees, and J. K. Tsotsos, Eds., Academic Press, xxiii–xxxii, doi:10.1016/B978-012375731-9/50003-3.

  • Waters, K. R., 2007: Verification of National Weather Service warnings using geographic information systems. Preprints, 23rd Conf. on IIPS, San Antonio, TX, Amer. Meteor. Soc., 4B.1. [Available online at https://ams.confex.com/ams/87ANNUAL/techprogram/paper_116773.htm.]

  • Whiton, R. C., P. L. Smith, S. G. Bigler, K. E. Wilk, and A. C. Harbuck, 1998: History of operational use of weather radar by U.S. weather services. Part II: Development of operational Doppler weather radars. Wea. Forecasting, 13, 244–252, doi:10.1175/1520-0434(1998)013<0244:HOOUOW>2.0.CO;2.

  • Zhang, J., and S. Sclaroff, 2013: Saliency detection: A Boolean map approach. 2013 IEEE International Conference on Computer Vision (ICCV 2013), IEEE, 153–160, doi:10.1109/ICCV.2013.26.

  • Fig. 1.

    Notion of activation function adapted from Itti et al. (1998). (left) The visual stimulus used as an input for the model. After extracting the features, (top center) the intensity map has multiple equally high peaks, (top right) which after activation produce a flat map. In contrast, (bottom center) the orientation feature map shows a larger peak embedded in slightly smaller peaks, and (bottom right) activation enhances that peak while suppressing all others.
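The peak promotion sketched in Fig. 1 follows the normalization operator N(·) of Itti et al. (1998): scale a feature map to a fixed range [0, M], then multiply by (M − m̄)², where m̄ is the mean of the local maxima other than the global one. A minimal Python sketch of this idea (the 3 × 3 interior scan and M = 1 are simplifying assumptions, not the paper's exact implementation):

```python
import numpy as np

def activation(m, M=1.0):
    # Normalize the feature map to the fixed range [0, M].
    span = m.max() - m.min()
    m = (m - m.min()) / span * M if span > 0 else np.zeros_like(m, dtype=float)
    # Collect local maxima: interior pixels that dominate their 3x3 neighborhood.
    peaks = []
    for i in range(1, m.shape[0] - 1):
        for j in range(1, m.shape[1] - 1):
            if m[i, j] > 0 and m[i, j] >= m[i - 1:i + 2, j - 1:j + 2].max():
                peaks.append(m[i, j])
    # Mean of all local maxima except one instance of the global maximum.
    others = sorted(peaks)[:-1]
    mbar = float(np.mean(others)) if others else 0.0
    # One dominant peak -> factor near 1; many comparable peaks -> factor near 0.
    return m * (M - mbar) ** 2

one_peak = np.zeros((7, 7)); one_peak[3, 3] = 1.0
many_peaks = np.zeros((7, 7))
many_peaks[1, 1] = many_peaks[1, 4] = many_peaks[4, 1] = many_peaks[4, 4] = 1.0
print(activation(one_peak).max())    # 1.0 — the lone peak survives
print(activation(many_peaks).max())  # 0.0 — competing equal peaks are suppressed
```

This reproduces the behavior in the figure: the flat intensity map (many equally high peaks) is suppressed, while the orientation map with a single dominant peak is promoted.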

  • Fig. 2.

    Functional block diagram outlining the main steps for the computation of WR-STS.

  • Fig. 3.

    Illustration of the steps for the computation of WR-STS (a reduced number of scales and orientation filters is used for simplicity). First, consecutive radar reflectivity images are scale decomposed, and their intensity, contrast, and orientation features are extracted. Then, a spatial activation function is applied at each scale, and single-feature maps are combined across all scales. Next, the feature maps are fused into spatial saliency maps, which are used to compute the temporal feature. Finally, the four feature maps are combined through a weighted average to produce the WR-STS map.
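The final fusion step in Fig. 3 — a weighted average of the four feature maps — can be sketched as follows. The min–max normalization and equal weights here are illustrative assumptions; the multiscale decomposition, feature extraction, and activation stages of the actual pipeline are omitted:

```python
import numpy as np

def normalize01(m):
    # Map a feature map onto [0, 1] so the four features are comparable.
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m, dtype=float)

def fuse_features(intensity, contrast, orientation, temporal,
                  weights=(0.25, 0.25, 0.25, 0.25)):
    # Weighted average of the normalized feature maps -> WR-STS-style map.
    maps = (intensity, contrast, orientation, temporal)
    return sum(w * normalize01(m) for w, m in zip(weights, maps))

rng = np.random.default_rng(1)
feature_maps = [rng.random((50, 50)) for _ in range(4)]
sts = fuse_features(*feature_maps)
print(sts.shape)  # (50, 50) — one saliency value per pixel
```

Because the weights sum to one and each normalized map lies in [0, 1], the fused map is also bounded by [0, 1], which is consistent with the 0.5 saliency contours overlaid in Fig. 6.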

  • Fig. 4.

    Illustration of the windowed computation of entropy for a weather radar reflectivity image (or intensity feature). The entropy is a functional of the distribution and therefore depends only on the estimated probability density function.
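The windowed entropy of Fig. 4 can be estimated by histogramming the reflectivity values in each sliding window and applying the Shannon entropy H = −Σ p log₂ p to the resulting probability estimate. A brute-force sketch (the window size, bin count, and dBZ range are assumed values, not the paper's settings):

```python
import numpy as np

def windowed_entropy(Z, win=9, nbins=16, zmin=0.0, zmax=75.0):
    """Per-pixel Shannon entropy of reflectivity (dBZ) in a win x win window."""
    half = win // 2
    H = np.zeros(Z.shape)
    Zp = np.pad(Z, half, mode="reflect")  # reflect so edge pixels get full windows
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            patch = Zp[i:i + win, j:j + win]
            counts, _ = np.histogram(patch, bins=nbins, range=(zmin, zmax))
            p = counts / counts.sum()  # estimated pdf over the window
            p = p[p > 0]               # empty bins contribute nothing to H
            H[i, j] = -np.sum(p * np.log2(p))
    return H

# A uniform region carries no information; a textured one does.
flat = np.full((20, 20), 30.0)                      # constant 30 dBZ
textured = np.random.default_rng(0).uniform(0.0, 75.0, (20, 20))
print(windowed_entropy(flat).max() == 0.0)          # True: one occupied bin
print(windowed_entropy(textured).mean() > 2.0)      # True: many occupied bins
```

As the caption notes, the result depends on the values only through the estimated distribution: shuffling the pixels inside a window leaves its entropy unchanged.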

  • Fig. 5.

    Damage paths left by 6 of the 12 tornadoes (the strongest ones) on 24 May 2011. County names mentioned in the text are shown for reference.

  • Fig. 6.

    Radar reflectivity scans of the tornadic storm on 24 May 2011 obtained from the KTLX radar. Active tornado warning polygons (red), severe thunderstorm polygons (yellow), and 0.5 or higher WR-STS contour lines (white) are overlaid on top of the reflectivity data: (a) 1932:07, (b) 1957:54, (c) 2027:43, (d) 2057:33, (e) 2127:20, (f) 2157:04, (g) 2226:48, and (h) 2256:33 UTC.

  • Fig. 7.

    Corresponding WR-STS maps for the reflectivity images in Fig. 6. Active tornado warning polygons (red), severe thunderstorm polygons (yellow), and 30-dBZ reflectivity contour lines (white) are overlaid on top of the WR-STS maps: (a) 1932:07, (b) 1957:54, (c) 2027:43, (d) 2057:33, (e) 2127:20, (f) 2157:04, (g) 2226:48, and (h) 2256:33 UTC.

  • Fig. 8.

    Selective ability of WR-STS: (a) normalized histograms (unit area) of reflectivity inside and outside the warning polygons; (b) as in (a), but for WR-STS; (c) ROC curves of reflectivity and WR-STS; and (d) time-composite WR-STS.
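The ROC comparison in Fig. 8c can be reproduced from per-pixel scores (reflectivity or WR-STS) and inside/outside-polygon labels by sweeping a decision threshold. A minimal sketch (the function names and the trapezoidal AUC summary are illustrative conventions, not the paper's exact verification procedure):

```python
import numpy as np

def roc_curve(scores, labels):
    # Sweep the threshold from high to low; labels are 1 inside a warning
    # polygon (event) and 0 outside (non-event).
    order = np.argsort(-np.asarray(scores, dtype=float))
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / max(int(y.sum()), 1)            # hit rate
    fpr = np.cumsum(1 - y) / max(int((1 - y).sum()), 1)  # false-alarm rate
    return fpr, tpr

def auc(fpr, tpr):
    # Area under the ROC curve by the trapezoidal rule.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# Perfectly separating scores give an AUC of 1.0.
fpr, tpr = roc_curve([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
print(auc(fpr, tpr))  # 1.0
```

A curve closer to the upper-left corner (larger AUC) indicates a score that better discriminates warned from unwarned regions, which is the sense in which Fig. 8c compares WR-STS against raw reflectivity.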
