Search Results
Showing 1–10 of 18 items for Author or Editor: Curtis R. Alexander
Abstract
A violent supercell tornado passed through the town of Spencer, South Dakota, on the evening of 30 May 1998, producing large gradients in damage severity. The tornado was rated at F4 intensity by damage survey teams. A Doppler On Wheels (DOW) mobile radar followed this tornado and observed it at ranges between 1.7 and 8.0 km during various stages of its life. The DOW was deployed less than 4.0 km from the town of Spencer between 0134 and 0145 UTC; during this period the tornado passed through Spencer, and peak Doppler velocity measurements exceeded 100 m s⁻¹. Data gathered by the DOW during this period had high spatial resolution, with sample volumes of approximately 34 m × 34 m × 37 m and frequent volume updates every 45–50 s.
The high-resolution Doppler velocity data gathered from low-level elevation scans, with sample volumes between 20 and 40 m AGL, are compared to extensive ground and aerial damage surveys performed by the National Weather Service (NWS) and the National Institute of Standards and Technology (NIST). Idealized radial profiles of tangential velocity are computed by fitting a model of an axisymmetric translating vortex to the Doppler radar observations, compensating for velocity components perpendicular to the radar beam as well as for the translational motion of the tornado vortex.
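A minimal sketch of this kind of vortex fit, assuming a translating Rankine profile and a least-squares objective (the function and parameter names here are illustrative, not the authors' exact procedure):

    import numpy as np
    from scipy.optimize import least_squares

    def rankine_doppler(params, x, y):
        # Doppler velocity of a translating Rankine vortex, as seen by a radar
        # at the origin; (x, y) are gate positions in the same coordinates.
        xc, yc, vmax, rmax, ux, uy = params
        dx, dy = x - xc, y - yc
        s = np.maximum(np.hypot(dx, dy), 1e-6)     # distance from vortex center
        vt = np.where(s <= rmax, vmax * s / rmax,  # solid-body core ...
                      vmax * rmax / s)             # ... potential-flow outer region
        u = -vt * dy / s + ux                      # tangential flow plus translation
        v = vt * dx / s + uy
        r = np.hypot(x, y)                         # range from the radar
        return (u * x + v * y) / r                 # projection onto the radar beam

    def fit_vortex(x, y, vr_obs, guess):
        # Least-squares fit of (center, vmax, rmax, translation) to the
        # observed single-Doppler velocities.
        return least_squares(lambda p: rankine_doppler(p, x, y) - vr_obs, guess).x

Evaluating the fitted profile along radii then yields idealized tangential-velocity profiles of the kind described above.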
Both the original single-Doppler velocity data and the interpolated velocity fields are compared with damage survey Fujita scale (F-scale) estimates throughout the town of Spencer. This structure-by-structure comparison revealed that radar-based estimates of F-scale intensity usually exceeded the damage-survey-based F-scale both inside and outside the town. Within Spencer, the radar-based wind field revealed two distinct velocity time series for locations inside and outside the path of the core-flow region. The center of the core-flow region tracked about 50 m farther north than the damage survey indicated because of the asymmetry induced by the 15 m s⁻¹ translational motion of the tornado. The radar consistently measured the strongest winds in the lowest 200 m AGL, with the most extreme Doppler velocities residing within 50 m AGL. Alternate measures of tornado wind field intensity that incorporate the effects of the duration of the extreme winds and of debris were explored. It is suggested that damage may not be a simple function of peak wind gust and structural integrity; the duration of intense winds, directional changes, accelerations, and upwind debris loading may also be critical factors.
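One way to illustrate a duration-aware intensity measure of the kind explored here (a hypothetical sketch; the threshold and names are assumptions, not the paper's definitions):

    import numpy as np

    def duration_measures(t, speed, threshold=50.0):
        # t: observation times (s); speed: wind speed (m/s) at one location;
        # threshold: assumed damaging-wind threshold (m/s).
        above = speed >= threshold
        dt = np.gradient(t)                      # local time step for the integrals
        dwell = np.sum(dt[above])                # total seconds above the threshold
        impulse = np.sum((speed[above] - threshold) * dt[above])  # exceedance integral
        return dwell, impulse

Two sites with identical peak gusts can then differ sharply in dwell time and exceedance integral, which is the distinction the abstract draws.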
Abstract
On the evening of 30 May 1998, atmospheric conditions across southeastern South Dakota led to the development of organized moist convection, including several supercells. One such supercell was tracked by both a Weather Surveillance Radar-1988 Doppler (WSR-88D) from Sioux Falls, South Dakota (KFSD), and a Doppler On Wheels (DOW) mobile radar. This supercell remained isolated for an hour and a half before being overtaken by a developing squall line. During this period the supercell produced at least one strong and one violent tornado, the latter of which passed through Spencer, South Dakota, despite the absence of strong low-level environmental wind shear. The two tornadoes were observed both visually and with the DOW radar at ranges between 1.7 and 12.9 km. The close proximity to the tornadoes permitted the DOW radar to observe tornado-scale structures on the order of 35 to 100 m, while the nearest WSR-88D resolved only the parent mesocyclone in the supercell. The DOW observations revealed a persistent Doppler velocity couplet and an associated reflectivity ring signature at the tip of the hook echo.
The DOW radar data contained tornado-strength winds over 35 m s⁻¹ within 100 m AGL approximately 180 s prior to both the first spotter report and visual confirmation of the first tornado associated with this supercell. Following the formation of the second tornado, the DOW radar observations revealed a tornado-strength Doppler velocity couplet within 150 m AGL between two separate tornado tracks determined by a National Weather Service (NWS) damage survey. Based upon the DOW Doppler velocity data, it appears that the second and third damage tracks from this supercell were produced by a single tornado.
The time–height evolution of the Doppler velocity couplet spanning both tornadoes revealed a gradual increase in vertical vorticity across each tornado's core region within a few hundred meters AGL, from near 0.2 to over 2.0 s⁻¹ over a 45-min period. A corresponding reduction in vertical vorticity was observed aloft, especially near 1000 m AGL, where vorticity values decreased from near 1.0 to about 0.5 s⁻¹ during the same interval. The shear across the Doppler velocity couplet appears to have strengthened both at the surface and aloft during both tornadoes. An oscillatory fluctuation in the near-surface shear across the tornado core developed during the second tornado, with peak velocity differences across the couplet as high as 206 m s⁻¹, Doppler velocities over 106 m s⁻¹, and peak ground-relative wind speeds reaching 118 m s⁻¹. The period of this intensity oscillation appears to be around 120 s, and it was most prominent just prior to and during the passage of the tornado through Spencer. Coincident with the tornado's passage through Spencer was a rapid descent of the reflectivity eye in the core of the tornado. A detailed comparison of surveyed tornado damage and radar-calculated tornado winds in Spencer is discussed in Part II.
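For reference, the vorticity values quoted are consistent with the usual solid-body estimate from a velocity couplet, ζ ≈ 2ΔV/D; a quick check, with the couplet diameter an assumption chosen only to illustrate the arithmetic:

    # Solid-body vorticity estimate from a Doppler velocity couplet.
    delta_v = 206.0    # m/s, peak velocity difference across the couplet
    diameter = 200.0   # m, assumed separation of the couplet extrema
    zeta = 2.0 * delta_v / diameter
    print(f"vorticity ~ {zeta:.1f} s^-1")   # ~2.1 s^-1, i.e., "over 2.0 s^-1"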
Abstract
High-resolution Doppler radar observations of tornadoes reveal a distinctive tornado-scale signature with the following properties: a reflectivity minimum aloft inside the tornado core (described previously as an “eye”), a high-reflectivity tube aloft that is slightly wider than the tornado core, and a tapering of this high-reflectivity tube near the ground. The results of simple one-dimensional and two-dimensional models demonstrate how these characteristics develop. Important processes in the models include centrifugal ejection of hydrometeors and/or debris by the rotating flow and recycling of some objects by the near-surface inflow and updraft.
Doppler radars sample the motion of objects within the tornado rather than the actual airflow. Because objects move at different speeds and along different trajectories from the air, error is introduced into kinematic analyses of tornadoes based on radar observations. In a steady, axisymmetric tornado, objects move outward relative to the air and move more slowly than the air in the tangential direction; in addition, an object's air-relative fall speed is less than its terminal fall speed in still air. The differences between air motion and object motion are greater for objects with greater characteristic fall speeds (i.e., larger, denser objects) and can have magnitudes of tens of meters per second. Estimates of these differences for specified object and tornado characteristics can be obtained from an approximation of the one-dimensional model.
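A plausible closed form for the outward (centrifuging) slip under these assumptions: with quadratic drag, terminal slip speed scales as the square root of the driving acceleration, giving u_r ≈ w_t·sqrt(v²/(rg)). This is a sketch in the spirit of the one-dimensional model, not its exact reduction:

    import math

    def radial_slip(w_t, v_tan, radius, g=9.81):
        # w_t: still-air terminal fall speed (m/s); v_tan: tangential wind (m/s);
        # radius: distance from the vortex axis (m). Assumes quadratic drag, so
        # terminal slip scales with sqrt(centripetal acceleration / gravity).
        return w_t * math.sqrt(v_tan**2 / (radius * g))

    # e.g., debris with a 10 m/s fall speed, 150 m from the axis in 80 m/s flow:
    print(radial_slip(10.0, 80.0, 150.0))   # ~21 m/s outward, i.e., tens of m/s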
Doppler On Wheels observations of the 30 May 1998 Spencer, South Dakota, tornado demonstrate how the apparent tornado structure can change when the radar-scatterer type changes. When the Spencer tornado entered the town and started lofting debris, changes occurred in the Doppler velocity and reflectivity fields that are consistent with an increase in mean scatterer size.
Abstract
The U.S. operational global data assimilation system provides updated analysis and forecast fields every 6 h, which is not frequent enough to handle the rapid error growth associated with hurricanes or other storms. This motivates development of an hourly updating global data assimilation system, but observational data latency can be a barrier. Two methods are presented to overcome this challenge: “catch-up cycles,” in which a 1-hourly system is reinitialized from a 6-hourly system that has assimilated high-latency observations; and “overlapping assimilation windows,” in which the system is updated hourly with new observations valid in the past 3 h. The performance of these methods is assessed in a near-operational setup using the Global Forecast System by comparing forecasts with in situ observations. At short forecast leads, the overlapping windows method performs comparably to the 6-hourly control in a simplified configuration and outperforms the control in a full-input configuration. In the full-input experiment, the catch-up cycle method performs similarly to the 6-hourly control; reinitializing from the 6-hourly control does not appear to provide a significant benefit. Results suggest that the overlapping windows method performs well in part because of the hourly update cadence, but also because hourly cycling systems can make better use of available observations. The impact of the hourly update relative to the 6-hourly update is most significant during the first forecast day, while impacts on longer-range forecasts were found to be mixed and mostly insignificant. Further effort toward an operational global hourly updating system should be pursued.
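A toy rendering of the "overlapping assimilation windows" cadence (the 3-h window comes from the abstract; everything else is illustrative):

    def overlapping_windows(cycle_hours, window_h=3):
        # Each hourly cycle assimilates observations valid in the preceding
        # window_h hours, so data arriving late still enter a later cycle.
        return {t: (t - window_h, t) for t in cycle_hours}

    for t, (t0, t1) in overlapping_windows(range(6)).items():
        print(f"cycle at hour {t}: assimilate obs valid in ({t0} h, {t1} h]")

Under the "catch-up cycles" alternative, the hourly stream would instead be reinitialized every 6 h from the 6-hourly system that has assimilated the high-latency observations.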
Abstract
In this study, the utility of dimensioned, neighborhood-based, and object-based forecast verification metrics for cloud verification is assessed using output from the experimental High-Resolution Rapid Refresh (HRRRx) model over a 1-day period containing different modes of convection. This is accomplished by comparing observed and simulated Geostationary Operational Environmental Satellite (GOES) 10.7-μm brightness temperatures (BTs). Traditional dimensioned metrics such as mean absolute error (MAE) and mean bias error (MBE) were used to assess overall model accuracy. The MBE showed that the HRRRx BTs for forecast hours 0 and 1 are too warm compared with the observations, indicating a lack of cloud cover, but rapidly become too cold in subsequent hours because of the generation of excessive upper-level cloudiness. Neighborhood and object-based statistics were used to investigate the source of the HRRRx cloud cover errors. The neighborhood statistic, the fractions skill score (FSS), showed that displacement errors between cloud objects identified in the HRRRx and GOES BTs increased with time. Combined with the MBE, the FSS distinguished when changes in MAE were due to differences in the HRRRx BT bias or to displacement of cloud features. The Method for Object-Based Diagnostic Evaluation (MODE) analyzed the similarity between HRRRx and GOES cloud features in shape and location. The similarity was summarized using the newly defined MODE composite score (MCS), an area-weighted combination of the cloud-feature match values from MODE. Combined with the FSS, the MCS indicated whether HRRRx forecast error is the result of differences in cloud shape, since the MCS is moderately large when forecast and observation objects are similar in size.
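The grid metrics here have compact definitions; a sketch (the cloud threshold and neighborhood width are assumptions, not values from the study):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def mae_mbe(fcst, obs):
        # Dimensioned metrics on brightness-temperature grids (K).
        return np.mean(np.abs(fcst - obs)), np.mean(fcst - obs)

    def fss(fcst_bt, obs_bt, cloud_thresh=235.0, width=5):
        # Fractions skill score on the binary "cloudy" field (BT at or below an
        # assumed threshold), using a square neighborhood of `width` grid points.
        f = uniform_filter((fcst_bt <= cloud_thresh).astype(float), size=width)
        o = uniform_filter((obs_bt <= cloud_thresh).astype(float), size=width)
        num = np.mean((f - o) ** 2)
        den = np.mean(f**2) + np.mean(o**2)
        return 1.0 - num / den if den > 0 else np.nan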
Abstract
A technique for model initialization using three-dimensional radar reflectivity data has been developed and applied within the NOAA 13-km Rapid Refresh (RAP) and 3-km High-Resolution Rapid Refresh (HRRR) regional forecast systems. This technique enabled the first assimilation of radar reflectivity data for operational NOAA forecast models, which is especially critical for more accurate short-range prediction of convective storms. For the RAP, the technique uses a diabatic digital filter initialization (DFI) procedure originally deployed to control initial inertia-gravity wave noise. Within the forward-model integration portion of diabatic DFI, temperature tendencies obtained from the model cloud/precipitation processes are replaced by specified latent heating–based temperature tendencies derived from the three-dimensional radar reflectivity data, where available. To further refine initial conditions for the convection-allowing HRRR model, a similar procedure is used in the HRRR, but without DFI. Both of these procedures, together called the "Radar-LHI" (latent heating initialization) technique, have been essential for initialization of ongoing precipitation systems, especially convective systems, within all NOAA operational versions of the 13-km RAP and 3-km HRRR models through the latest implementation upgrade at NCEP in 2020. Application of the latent heat–derived temperature tendency induces a vertical circulation with low-level convergence and upper-level divergence in precipitation systems. Retrospective tests of the Radar-LHI technique show significant improvement in short-range (0–6 h) precipitation system forecasts, as revealed by reflectivity verification scores. The results presented document the impact on HRRR reflectivity forecasts of the radar reflectivity initialization technique applied to the RAP alone, the HRRR alone, and both the RAP and HRRR.
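A toy version of the tendency-replacement step (the reflectivity-to-heating mapping below is a placeholder for illustration, not the operational Radar-LHI formulation):

    import numpy as np

    L_V, C_P = 2.5e6, 1004.0   # latent heat of vaporization (J/kg), specific heat (J/kg/K)

    def lhi_tendency(dbz, model_dtdt, dbz_min=28.0, cond_per_dbz=2.0e-7):
        # dbz: 3D reflectivity; model_dtdt: model temperature tendency (K/s).
        # Where reflectivity indicates precipitation, replace the model tendency
        # with heating proportional to an assumed condensation rate per dBZ.
        heating = (L_V / C_P) * cond_per_dbz * np.maximum(dbz - dbz_min, 0.0)
        return np.where(dbz >= dbz_min, heating, model_dtdt)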
Significance Statement
The large forecast uncertainty of convective situations, even at short lead times, coupled with the hazardous weather they produce, makes convective storm prediction one of the most significant short-range forecast challenges confronting the operational numerical weather prediction community. Prediction of heavy precipitation events likewise requires accurate initialization of precipitation systems. An innovative assimilation technique using radar reflectivity data to initialize NOAA operational weather prediction models is described. This technique, which uses latent heating specified from radar reflectivity (and can accommodate lightning data and other convection/precipitation indicators), was first implemented in 2009 at NOAA/NCEP and continues to be used in 2022 in the NCEP-operational RAP and HRRR models, making it a backbone of NOAA's rapidly updating numerical weather prediction capability.
Abstract
The Rapid Refresh (RAP) is an hourly updated regional meteorological data assimilation and short-range model forecast system running operationally at NOAA/National Centers for Environmental Prediction (NCEP) using the community Gridpoint Statistical Interpolation analysis system (GSI). This paper documents the application of the GSI three-dimensional hybrid ensemble–variational assimilation option to the RAP high-resolution, hourly cycling system and shows the skill improvements in 1–12-h forecasts of upper-air wind, moisture, and temperature over the purely three-dimensional variational analysis system. Use of perturbation data from an independent global ensemble, the Global Data Assimilation System (GDAS), is demonstrated to be very effective for the regional RAP hybrid assimilation. The application of the GSI-hybrid assimilation for the RAP is explained, and results from sensitivity experiments are shown that define the configuration of the operational RAP version 2, the ratio of static to ensemble background error covariance, and the vertical and horizontal localization scales for the operational RAP version 3. Finally, a 1-week RAP experiment from a summer period was performed using a global ensemble from a winter period, suggesting that a significant component of the ensemble's multivariate covariance structure is independent of the time matching between analysis time and ensemble valid time.
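A sketch of the hybrid background-error covariance at the heart of this option, with the static/ensemble weight and the localization length as the tunables discussed above (a 1D toy, not the GSI implementation):

    import numpy as np

    def hybrid_covariance(b_static, ens_perts, beta_ens=0.75, loc_length=10.0):
        # b_static: (n, n) static covariance; ens_perts: (n_members, n)
        # perturbations about the ensemble mean; beta_ens: ensemble weight
        # (the static term gets 1 - beta_ens).
        n_mem, n = ens_perts.shape
        p_ens = ens_perts.T @ ens_perts / (n_mem - 1)   # raw ensemble covariance
        idx = np.arange(n)
        dist = np.abs(idx[:, None] - idx[None, :])      # 1D grid-point separations
        loc = np.exp(-0.5 * (dist / loc_length) ** 2)   # Gaussian localization
        return (1 - beta_ens) * b_static + beta_ens * (loc * p_ens)  # Schur product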
Abstract
U.S. National Weather Service (NWS) forecasters assess and communicate hazardous weather risks, including the likelihood of a threat and its impacts. Convection-allowing model (CAM) ensembles offer potential to aid forecasting by depicting atmospheric outcomes, including associated uncertainties, at the refined space and time scales at which hazardous weather often occurs. Little is known, however, about what CAM ensemble information is needed to inform forecasting decisions. To address this knowledge gap, participant observations and semistructured interviews were conducted with NWS forecasters from national centers and local weather forecast offices. Data were collected about forecasters’ roles and their forecasting processes, uses of model guidance and verification information, interpretations of prototype CAM ensemble products, and needs for information from CAM ensembles. Results revealed forecasters’ needs for specific types of CAM ensemble guidance, including a product that combines deterministic and probabilistic output from the ensemble as well as a product that provides map-based guidance about timing of hazardous weather threats. Forecasters also expressed a general need for guidance to help them provide impact-based decision support services. Finally, forecasters conveyed needs for objective model verification information to augment their subjective assessments and for training about using CAM ensemble guidance for operational forecasting. The research was conducted as part of an interdisciplinary research effort that integrated elicitation of forecasters’ CAM ensemble needs with model development efforts, with the aim of illustrating a robust approach for creating information for forecasters that is truly useful and usable.
Abstract
In this study, object-based verification with the Method for Object-Based Diagnostic Evaluation (MODE) is used to assess the accuracy of cloud-cover forecasts from the experimental High-Resolution Rapid Refresh (HRRRx) model during the warm and cool seasons. This is accomplished by comparing cloud objects identified by MODE in observed and simulated Geostationary Operational Environmental Satellite 10.7-μm brightness temperatures for August 2015 and January 2016. The analysis revealed that more cloud objects and a more pronounced diurnal cycle occurred during August, with larger object sizes observed in January because of the prevalence of synoptic-scale cloud features. With the exception of the 0-h analyses, the forecasts contained fewer cloud objects than were observed. HRRRx forecast accuracy is assessed using two methods: traditional verification, which compares the locations of grid points identified as observation and forecast objects, and the MODE composite score, an area-weighted calculation using the object-pair interest values computed by MODE. The 1-h forecasts for both August and January were the most accurate for their respective months. Inspection of the individual MODE attribute interest scores showed that, even though displacement errors between the forecast and observation objects increased between the 0-h analyses and 1-h forecasts, the forecasts were more accurate than the analyses because the sizes of the largest cloud objects more closely matched the observations. The 1-h forecasts from August were more accurate than those from January because the spatial displacement between the cloud objects was smaller and the forecast objects better represented the size of the observation objects.
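The MODE composite score admits a compact sketch directly from its description above; whether unmatched objects enter with zero interest is an assumption here:

    def mode_composite_score(objects):
        # objects: list of (area_in_gridpoints, interest) pairs, interest in [0, 1];
        # unmatched objects can be included with interest 0 so misses lower the score.
        total_area = sum(a for a, _ in objects)
        if total_area == 0:
            return float("nan")
        return sum(a * i for a, i in objects) / total_area

    # e.g., two matched cloud objects and one miss:
    print(mode_composite_score([(1200, 0.85), (400, 0.70), (250, 0.0)]))  # ~0.70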
Abstract
Verification methods for convection-allowing models (CAMs) should consider the finescale spatial and temporal detail provided by CAMs, and both neighborhood and object-based methods can account for displaced features that may still provide useful information. This work explores both contingency-table-based and object-based verification techniques as they relate to forecasts of severe convection. Two key fields in severe weather forecasting are investigated: updraft helicity (UH) and simulated composite reflectivity. UH is used to generate severe weather probabilities called surrogate severe fields, which have two tunable parameters: the UH threshold and the smoothing level. Probabilities computed using the UH threshold and smoothing level that give the best area under the receiver operating characteristic curve result in very high probabilities, while optimizing the parameters based on the reliability component of the Brier score results in much lower probabilities. Subjective ratings from participants in the 2018 NOAA Hazardous Weather Testbed Spring Forecasting Experiment (SFE) provide a complementary evaluation source. This work compares the verification methodologies in the context of three CAMs using the Finite-Volume Cubed-Sphere Dynamical Core (FV3), which will be the foundation of the U.S. Unified Forecast System (UFS). Three agencies ran FV3-based CAMs during the five-week 2018 SFE. These FV3-based CAMs are verified alongside a current operational CAM, the High-Resolution Rapid Refresh version 3 (HRRRv3). The HRRR is planned to eventually use the FV3 dynamical core as part of the UFS; as such, evaluations relative to current HRRR configurations are imperative to maintaining high forecast quality and informing future implementation decisions.
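A sketch of the surrogate-severe construction as described, with the two tunables exposed; the specific threshold, neighborhood radius, and smoothing sigma are assumptions, not the experiment's settings:

    import numpy as np
    from scipy.ndimage import gaussian_filter, maximum_filter

    def surrogate_severe(uh, uh_thresh=75.0, radius_pts=13, sigma_pts=24.0):
        # uh: 2D max updraft helicity (m^2 s^-2). Mark neighborhoods around
        # threshold exceedances (radius_pts ~ 40 km at 3-km grid spacing), then
        # smooth with a Gaussian to obtain a probability-like field.
        exceed = (uh >= uh_thresh).astype(float)
        hits = maximum_filter(exceed, size=2 * radius_pts + 1)
        return gaussian_filter(hits, sigma=sigma_pts)

Raising uh_thresh or sigma_pts reproduces the trade-off noted above: parameters tuned for area under the ROC curve favor broad, high probabilities, while tuning for Brier-score reliability pushes the probabilities much lower.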