Abstract
The National Weather Service plays a critical role in alerting the public when dangerous weather occurs. Tornado warnings are one of the most publicly visible products the NWS issues, given the large societal impacts tornadoes can have. Understanding the performance of these warnings is crucial for providing adequate warning during tornadic events and improving overall warning performance. This study aims to understand warning performance during the lifetimes of individual storms (specifically in terms of probability of detection and lead time). For example, does probability of detection vary based on whether the tornado was the first produced by the storm, or the last? We use tornado outbreak data from 2008–2014, archived NEXRAD radar data, and the NWS verification database to associate each tornado report with a storm object. This approach allows for an analysis of warning performance based on the chronological order of tornado occurrence within each storm. Results show that probability of detection and lead time increase with later tornadoes in the storm; the first tornadoes of each storm are less likely to be warned and on average have less lead time. Probability of detection also decreases overnight, especially for first tornadoes and storms that produce only one tornado. These results are important for understanding how tornado warning performance varies during individual storm life cycles and how upstream forecast products (e.g., Storm Prediction Center tornado watches, mesoscale discussions, etc.) may increase warning confidence for the first tornado produced by each storm.
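To make the two verification metrics concrete, the sketch below computes probability of detection (POD = warned tornadoes / all tornadoes) and mean lead time, stratified by tornado order within a storm. This is an illustration of the metrics only, not the study's actual pipeline, and all data values are hypothetical.

```python
def pod(events):
    """POD = warned tornadoes / all tornadoes (a miss has no lead time)."""
    warned = sum(1 for e in events if e["lead_time_min"] is not None)
    return warned / len(events)

def mean_lead_time(events):
    """Mean lead time (minutes), computed over warned tornadoes only."""
    lts = [e["lead_time_min"] for e in events if e["lead_time_min"] is not None]
    return sum(lts) / len(lts) if lts else float("nan")

# Hypothetical storm: three tornadoes in chronological order.
# lead_time_min is None when the tornado was unwarned (a miss).
storm = [
    {"order": 1, "lead_time_min": None},  # first tornado: missed
    {"order": 2, "lead_time_min": 8},
    {"order": 3, "lead_time_min": 15},
]

first = [e for e in storm if e["order"] == 1]
later = [e for e in storm if e["order"] > 1]
print(pod(first), pod(later))   # 0.0 1.0
print(mean_lead_time(later))    # 11.5
```

Stratifying the same two functions by first versus later tornadoes is all that is needed to reproduce the kind of within-storm comparison the abstract describes.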
Abstract
This study presents an initial demonstration of assimilating small Uncrewed Aircraft System (sUAS) data into an operational model with the goal of ultimately improving tropical cyclone (TC) analyses and forecasts. The observations, obtained using the Coyote sUAS in Hurricane Maria (2017), were assimilated into the operational Hurricane Weather Research and Forecasting (HWRF) system as they could be in operations. Results suggest that the Coyote data can benefit HWRF forecasts. A single-cycle case study produced the best results when the Coyote observations were assimilated at greater horizontal resolution with more relaxed quality control (QC) than comparable flight-level high-density observations currently used in operations. The case study results guided experiments that cycled HWRF for a roughly four-day period covering all Coyote flights into Maria. The cycled experiment that assimilated the most data improved initial inner-core structure in the analyses and agreed better with other aircraft observations. The average errors in track and intensity decreased in the subsequent forecasts. Intensity forecasts were too weak when no Coyote data were assimilated, and assimilating the Coyote data made the forecasts stronger. Results also suggest that a symmetric distribution of Coyote data around the TC center is necessary to maximize its benefits in the current configuration of operational HWRF. Although the sample size was limited, these experiments provide insight for potential operational use of data from newer sUAS platforms in future TC applications.
Abstract
Climate trends indicate that extreme heat events are becoming more common and more severe over time, requiring improved strategies to communicate heat risk and protective actions. However, there exists a disconnect in heat-related communication from experts, who commonly include heat-related jargon (i.e., technical language), to decision makers and the general public. The use of jargon has been shown to reduce meaningful engagement with and understanding of messages written by experts. Translating technical language into comprehensible messages that encourage decision makers to take action has been identified as a priority to enable impact-based decision support. Knowing what concepts and terms are perceived as jargon, and why, is a first step to increasing communication effectiveness. With this in mind, we focus on the mental models about extreme heat among two groups of domain experts (those trained in atmospheric science and those trained in emergency management) to identify how each group understands terms and concepts about extreme heat. We use a hybrid data collection method of open card sorting and think-aloud interviews to identify how participants conceptualize and categorize terms and concepts related to extreme heat. While we find few differences within the sorted categories, we learn that the processes leading to decisions about the importance of including, or not including, technical information differ by group. The results lead to recommendations and priorities for communicating about extreme heat.
Abstract
A radar display is a tool that depicts meteorological data over space and time; therefore, an individual must think spatially and temporally in addition to drawing on their own meteorological knowledge and past weather experiences. We aimed to understand how the construal of situational risks and outcomes influences the perceived usefulness of a radar display and to explore how radar users interpret distance, time, and meteorological attributes using hypothetical scenarios in the Tampa Bay area (Florida). Ultimately, we wanted to understand how and why individuals use weather radar and to discover what makes it a useful tool. To do this, construal level theory and geospatial thinking guided the mixed methods used in this study to investigate four research objectives. Our findings show that radar is used most often by our participants to anticipate what will happen in the near future in their area. Participants described in their own words what they were viewing while using a radar display and reported what hazards they expected at the study location. Many participants associated the occurrence of lightning or strong winds with “red” and “orange” reflectivity values on a radar display. Participants provided valuable insight into what was and was not found useful about certain radar displays. We also found that most participants overestimated the amount of time they would have before precipitation would begin at their location. Overall, weather radar was found to be a very useful tool; however, judging spatial and temporal proximity became difficult when storm motion/direction was not easily identifiable.
Significance Statement
The purpose of this study is to understand how and why individuals use weather radar and to discover what makes radar a useful tool. We were particularly interested in exploring how distance and time are thought about when using radar. We found that radar is generally a useful tool for decision-making except when a storm is stationary. Participants use their personal experiences and knowledge of past weather events when they use a radar display. We also found that participants often overestimated how much time was available before rain would begin. These findings help us understand how individuals use weather radar to make decisions and may improve our understanding of protective action behavior.
Abstract
Recent research has demonstrated a relationship between convectively coupled Kelvin waves (CCKWs) and tropical cyclone (TC) genesis, likely due to the influence of CCKWs on the large-scale environment. However, it remains unclear which environmental factors are most important and how they connect to TC genesis processes. Using a 39-year database of African easterly waves (AEWs) to create composites of reanalysis and satellite data, it is shown that genesis may be facilitated by CCKW-driven modifications to convection and moisture. First, stand-alone composites of genesis demonstrate the significant role of environmental preconditioning and convective aggregation. A moist static energy variance budget indicates that convective aggregation during genesis is dominated by feedbacks between convection and longwave radiation. These processes begin over two days prior to genesis, supporting previous observational work. Shifting attention to CCKWs, up to 76% of developing AEWs encounter at least one CCKW in their lifetime. An increase in genesis events following convectively active CCKW phases is found, corroborating earlier studies. A decrease in genesis events following convectively suppressed phases is also identified. Using CCKW-centered composites, we show that the convectively active CCKW phase enhances convection and moisture content in the vicinity of AEWs prior to genesis. Furthermore, enhanced convective activity is the main discriminator between AEW-CCKW interactions that result in genesis versus those that do not. This analysis suggests that CCKWs may influence genesis through environmental preconditioning and radiative-convective feedbacks, among other factors. A secondary finding is that AEW attributes as far east as Central Africa may be predictive of downstream genesis.
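For context, the moist static energy (MSE) variance budget referenced above is commonly written in a column-integrated form of the following kind (a standard Wing-and-Emanuel-type decomposition; the study's exact notation may differ):

```latex
\frac{1}{2}\,\frac{\partial\, \overline{\hat{h}'^{\,2}}}{\partial t}
  \;=\; \overline{\hat{h}'\,\mathrm{SEF}'}
  \;+\; \overline{\hat{h}'\,\mathrm{LW}'}
  \;+\; \overline{\hat{h}'\,\mathrm{SW}'}
  \;-\; \overline{\hat{h}'\,\nabla_h\!\cdot\!\big(\widehat{\mathbf{u}h}\big)'}
```

Here \(\hat{h}\) is the column-integrated MSE, primes denote anomalies from a horizontal mean, SEF is the surface enthalpy flux, and LW and SW are the column longwave and shortwave radiative source terms. A positive longwave covariance term, \(\overline{\hat{h}'\,\mathrm{LW}'} > 0\), means radiation is amplifying MSE variance, which is the convection-longwave feedback signature of aggregation described in the abstract.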
Abstract
It is well known that the El Niño-Southern Oscillation (ENSO) can influence the East Asian winter climate by modifying the atmospheric circulation over the western North Pacific (WNP). While the impact on precipitation in southeastern China has been extensively studied, the ENSO signal in surface air temperature (SAT) remains overlooked. In this paper, we identify robust ENSO footprints in the winter daily minimum SAT in southeastern China, with El Niño winters generally accompanied by warmer-than-normal minimum SAT anomalies. In contrast, the responses of maximum SAT are weak and negligible, suggesting a diurnal cycle-dependent ENSO influence. Further analysis indicates that this diurnal cycle dependence stems primarily from the disparate surface radiative heating between day and night induced by the ENSO-related local total cloud cover (TCC) change. The warmer minimum SAT occurring in the early morning of El Niño winters is mainly caused by the enhanced surface downward longwave radiative heating as a result of the TCC increase. However, in the afternoon of El Niño winters, although the anomalous horizontal advection of warm air plays a role, there is surface radiative cooling as the weakening of solar radiation due to TCC reflection overwhelms the increase in downward longwave radiation, which leads to a weakened sensible heat flux and thus has a cooling effect on the SAT. Ultimately, these two processes effectively cancel each other out and together produce insignificant maximum SAT responses. Our conclusions carry important implications for the seasonal-to-interannual winter climate prediction in southeastern China.
Abstract
A considerable part of the skill in decadal forecasts often comes from the forcings, which are present in both initialized and uninitialized model experiments. This makes the added value from initialization difficult to assess. We investigate statistical tests to quantify whether initialized forecasts provide skill over the uninitialized experiments. We consider three correlation-based statistics previously used in the literature. The distributions of these statistics under the null hypothesis that initialization has no added value are calculated by a surrogate data method. We present some simple examples and study the statistical power of the tests. We find that there can be large differences in both the values and the power for the different statistics. In general, the simple statistic defined as the difference between the skill of the initialized and uninitialized experiments behaves best. However, for all statistics the risk of rejecting a true null hypothesis is too high compared to the nominal value. We compare the three tests on initialized decadal predictions (hindcasts) of near-surface temperature performed with a climate model and find evidence for a significant effect of initialization at short lead times. In contrast, we find little evidence for a significant effect of initialization at lead times longer than 3 years when the experience from the simple experiments is included in the estimation.
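The simplest of the three statistics described above can be sketched as follows: compute the skill difference Δ = corr(initialized, obs) − corr(uninitialized, obs), then build its null distribution from surrogate series in which the initialization information has been destroyed. The surrogate scheme here (circular time shifts of the initialized series) and all data are illustrative simplifications, not the paper's exact method.

```python
import numpy as np

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def surrogate_test(obs, init, uninit, n_sur=999, seed=0):
    """One-sided test of Delta = corr(init, obs) - corr(uninit, obs) > 0."""
    rng = np.random.default_rng(seed)
    delta = corr(init, obs) - corr(uninit, obs)
    n = len(obs)
    null = []
    for _ in range(n_sur):
        # Circular shift destroys the phase alignment gained by initialization
        sur = np.roll(init, int(rng.integers(1, n)))
        null.append(corr(sur, obs) - corr(uninit, obs))
    p = (1 + sum(d >= delta for d in null)) / (n_sur + 1)
    return delta, p

# Synthetic example: both experiments share a forced signal, but only the
# "initialized" series also tracks the observed internal variability.
rng = np.random.default_rng(1)
t = np.arange(40)
forced = np.sin(2 * np.pi * t / 20)
obs = forced + 0.3 * rng.standard_normal(40)
uninit = forced + 0.5 * rng.standard_normal(40)
init = 0.7 * obs + 0.3 * rng.standard_normal(40)

delta, p = surrogate_test(obs, init, uninit)
print(delta > 0, 0 < p <= 1)
```

Note the caveat the abstract raises: with autocorrelated forced signals, simple surrogate schemes like this one tend to reject a true null too often, which is exactly why the authors study the tests' actual size against their nominal level.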
Abstract
Airborne microphysical measurements of a frontal precipitation event in North China were used to evaluate five microphysics schemes for predicting the bulk properties of ice particles: the Morrison and Thompson schemes, which use predetermined ice categories; the 1-ice- and 2-ice-category configurations of the Predicted Particle Properties (P3) scheme and the Ice-Spheroids Habit Model with Aspect-Ratio Evolution (ISHMAEL) scheme, which model the evolution of particle properties; and the fast version of the spectral bin microphysics scheme (SBM_fast), all within the Weather Research and Forecasting (WRF) Model. WRF simulations with these schemes successfully reproduced the observed temperature and the liquid and total water content profiles at corresponding times and locations, allowing for a credible comparison of the predicted particle properties with the aircraft measurements. The simulated results with the 1-ice-category P3 scheme are in good agreement with the observations for all the particle properties we examined. The 2-ice-category P3 scheme overestimates the spectrum width and underestimates the number concentration, which can be alleviated by reducing the ice collection efficiency. The simulation with the SBM_fast scheme deviates from the observed ice particle size distributions, since the mass-diameter relationship of snow-sized particles adopted in this scheme may not be applicable to this stratiform cloud case.
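As context for the last point: microphysics schemes typically prescribe ice particle mass as a power law of the particle's maximum dimension, so the assumed coefficients directly control how a simulated size distribution maps to mass content. In the generic form (the coefficients a and b here are placeholders, not the specific SBM_fast values):

```latex
m(D) = a\,D^{b}
```

For snow-sized aggregates, b is typically well below 3 (the value for a solid ice sphere of constant density), reflecting their open, low-density structure; a mismatch between the assumed (a, b) pair and the particles actually sampled is the kind of inapplicability the abstract attributes to the SBM_fast result.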
Abstract
Further long-term investments in high-quality, research-driven, fit-for-purpose observations of atmospheric composition are needed globally to meet urgent societal needs related to weather, climate, air quality, and other environmental issues. Challenges include maintaining current observing systems in the face of eroding budgets for long-term monitoring and filling the geographical gaps for key constituents needed for sound services and policies. The observing systems can be bolstered through science-for-services applications, by embracing interoperable observation systems and standardized metadata, and by ensuring that the data are findable, accessible, interoperable, and reusable (FAIR). There is an urgent need to move from opportunity-driven, single-component observations to more systematic, planned multifunctional infrastructure, where the observational data flow is coupled with Earth system models to serve both operational and research purposes. This approach fosters a community where user experience feeds back into the research components and where mature research results are translated into operational applications. This will lead to faster exploration and exploitation of atmospheric composition information and more impactful applications for science and society. We discuss here the urgent need to (i) achieve global coverage, (ii) harmonize infrastructure operations, (iii) establish focused policies, and (iv) strengthen coordination of atmospheric composition infrastructure.