Search Results
You are looking at 1–10 of 17 items for
- Author or Editor: James Correia Jr.
Abstract
Dropsonde observations from the Bow Echo and Mesoscale Convective Vortex Experiment (BAMEX) are used to document the spatiotemporal variability of temperature, moisture, and wind within mesoscale convective systems (MCSs). Onion-type sounding structures are found throughout the stratiform region of MCSs, but the temperature and moisture variability is large. Composite soundings were constructed and statistics of thermodynamic variability were generated within each subregion of the MCS. The calculated air vertical velocity helped identify subsaturated downdrafts. It was found that lapse rates within the cold pool varied markedly throughout the MCS. Layered wet-bulb potential temperature profiles seem to indicate that air within the lowest several kilometers comes from a variety of source regions. It was also found that lapse-rate transitions across the 0°C level were more common than isothermal melting layers. The authors discuss the implications of these findings and how they can be used to validate future high-resolution numerical simulations of MCSs.
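As a minimal illustration of the kind of sounding diagnostic discussed above, the sketch below computes layer lapse rates from a hypothetical dropsonde temperature-height profile and checks whether the layer containing the 0°C level looks isothermal or shows a lapse-rate transition. The profile values and the array names are illustrative assumptions, not the authors' data or code.

```python
import numpy as np

# Hypothetical dropsonde profile: height (m AGL) and temperature (deg C),
# ordered from the surface upward. Values are illustrative only.
z = np.array([0, 500, 1000, 1500, 2000, 2500, 3000, 3500, 4000, 4500, 5000], dtype=float)
T = np.array([24.0, 20.5, 17.0, 13.0, 9.5, 6.0, 2.5, 0.0, -0.2, -3.0, -6.5])

# Layer lapse rates in deg C per km (positive = temperature decreasing with height).
lapse = -np.diff(T) / np.diff(z) * 1000.0

# Locate the layer containing the 0 deg C level and inspect its lapse rate.
crossing = np.where(np.sign(T[:-1]) != np.sign(T[1:]))[0]
if crossing.size:
    i = crossing[0]
    print(f"0 deg C level between {z[i]:.0f} and {z[i+1]:.0f} m AGL; "
          f"layer lapse rate = {lapse[i]:.1f} C/km")
    # A near-zero lapse rate here would suggest an isothermal melting layer;
    # a larger value suggests a lapse-rate transition across the 0 deg C level.
```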
Abstract
A method is detailed that filters mesoscale gravity wave signals from synoptic-level observation data using empirical orthogonal functions (EOFs). Similar EOF analyses have been used to study many oceanographic and meteorological features by allowing the examination of the variance associated with the principal orthogonal components of a time series in both spatial and temporal formats. Generally, EOF components are tied only to the underlying physical phenomena driving the observations when they represent a significantly large portion of the cumulative EOF variance. This work demonstrates a case in which a physically significant gravity wave event is recovered from the synoptic signal using EOF components that represent a small percentage of the total signal variance. In this case this EOF filtering technique appears to offer several advantages over more traditional digital filtering methods; namely, it appears to capture more of the gravity wave amplitude, it requires less preconditioning of the time series data, and it provides filtered solutions at the first time step.
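To make the filtering idea concrete, here is a minimal sketch of EOF decomposition and partial reconstruction on a synthetic space-time matrix: the field is decomposed by singular value decomposition and then rebuilt from a small subset of modes attributed to the wave, even though those modes explain only a small fraction of the total variance. The synthetic data, the choice of modes, and all variable names are assumptions for illustration, not the method's exact configuration.

```python
import numpy as np

# Hypothetical space-time matrix X: rows = observation times, columns = stations
# (e.g., surface pressure perturbations). Synthetic data stand in for real observations.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 12.0, 145)            # hours
x = np.arange(20)                          # station index
synoptic = 5.0 * np.sin(2 * np.pi * t / 24.0)[:, None] * np.ones(x.size)
wave = 0.8 * np.sin(2 * np.pi * (t[:, None] / 2.0 - x[None, :] / 10.0))  # mesoscale wave
X = synoptic + wave + 0.2 * rng.standard_normal((t.size, x.size))

# EOF analysis via SVD of the anomaly matrix: rows of Vt are spatial EOF patterns,
# U * S gives the corresponding principal-component time series.
Xa = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xa, full_matrices=False)
var_frac = S**2 / np.sum(S**2)

# Reconstruct the field using only the modes attributed to the wave signal
# (hypothetically modes 1 and 2 here, rather than the dominant mode 0).
modes = [1, 2]
X_wave = (U[:, modes] * S[modes]) @ Vt[modes, :]
print("variance fraction of retained modes:", var_frac[modes].round(3))
```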
Abstract
Tornado warnings are one of the flagship products of the National Weather Service. We update the time series of various metrics of performance in order to provide baselines over the 1986–2016 period for lead time, probability of detection, false alarm ratio, and warning duration. We have used metrics (mean lead time for tornadoes warned in advance, fraction of tornadoes warned in advance) that work in a consistent way across the official changes in policy for warning issuance, as well as across points in time when unofficial changes took place. The mean lead time for tornadoes warned in advance was relatively constant from 1986 to 2011, while the fraction of tornadoes warned in advance increased through about 2006, and the false alarm ratio slowly decreased. The largest changes in performance took place in 2012, when the default warning duration decreased and there was an apparent increased emphasis on reducing false alarms. As a result, the lead time, probability of detection, and false alarm ratio all decreased in 2012.
Our analysis is based, in large part, on signal detection theory, which separates the quality of the warning system from the threshold for issuing warnings. Threshold changes lead to trade-offs between false alarms and missed detections. Such changes provide further evidence for changes in what the warning system as a whole considers important, as well as highlighting the limitations of measuring performance by looking at metrics independently.
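For readers unfamiliar with these verification metrics, the sketch below shows how probability of detection and false alarm ratio follow from a standard 2x2 warning contingency table; the counts are invented for illustration only.

```python
# Standard 2x2 verification counts for tornado warnings (values are illustrative):
hits = 820            # tornadoes that occurred inside a valid warning
misses = 310          # tornadoes with no warning in effect
false_alarms = 1950   # warnings issued with no tornado

pod = hits / (hits + misses)                 # probability of detection
far = false_alarms / (hits + false_alarms)   # false alarm ratio

# Raising the warning threshold typically lowers both POD and FAR; signal detection
# theory separates that threshold choice from the intrinsic quality of the system.
print(f"POD = {pod:.2f}, FAR = {far:.2f}")
```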
Abstract
Warm-season convective system rainfall remains a particularly difficult forecast challenge. For these events, it is possible that ensemble forecasts would provide helpful information unavailable in a single deterministic forecast. In this study, an intense derecho event accompanied by a well-organized band of heavy rainfall is used to show that for some situations, the predictability of rainfall even within a 12–24-h period is so low that a wide range of simulations using different models, different physical parameterizations, and different initial conditions all fail to provide even a small signal that the event will occur. The failure of a wide range of models and parameterizations to depict the event might suggest inadequate representation of the initial conditions. However, a range of different initial conditions also failed to lead to a well-simulated event, suggesting that some events are unlikely to be predictable with the current observational network, and ensemble guidance for such cases may provide limited additional information useful to a forecaster.
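As context for what "even a small signal" means in ensemble terms, the sketch below computes a simple exceedance probability from hypothetical ensemble rainfall accumulations at a point; in a failure case like the one described, that probability would be near zero for every configuration tried. The member values and threshold are invented.

```python
import numpy as np

# Hypothetical 12-24-h rainfall accumulations (mm) at a verification point
# from a multi-model, multi-physics, multi-initial-condition ensemble.
members_mm = np.array([1.2, 0.0, 3.5, 0.4, 2.1, 0.0, 5.0, 1.8, 0.9, 0.0])

threshold_mm = 25.0  # heavy-rain threshold of interest
prob = np.mean(members_mm >= threshold_mm)
print(f"ensemble probability of >= {threshold_mm} mm: {prob:.2f}")
# A probability of 0.00 across all members is the "no signal" outcome
# described in the abstract.
```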
Abstract
The development and propagation of mesoscale convective systems (MCSs) were examined within the Weather Research and Forecasting (WRF) model using the Kain–Fritsch (KF) cumulus parameterization scheme and a modified version of this scheme. Mechanisms that led to propagation in the parameterized MCS are evaluated and compared between the versions of the KF scheme. Sensitivity to the convective time step is identified and explored for its role in scheme behavior. The sensitivity of parameterized convection propagation to microphysical feedback and to the shape and magnitude of the convective heating profile is also explored.
Each version of the KF scheme has a favored calling frequency that alters the scheme’s initiation frequency despite using the same convective trigger function. The authors propose that this behavior results in part from interaction with computational damping in WRF. A propagating convective system develops in simulations with both versions, but the typical flow structures are distorted (elevated ascending rear inflow as opposed to a descending rear inflow jet as is typically observed). The shape and magnitude of the heating profile is found to alter the propagation speed appreciably, even more so than the microphysical feedback. Microphysical feedback has a secondary role in producing realistic flow features via the resolvable-scale model microphysics. Deficiencies associated with the schemes are discussed and improvements are proposed.
Abstract
A three-dimensional (in space and time) object identification algorithm is applied to high-resolution forecasts of hourly maximum updraft helicity (UH)—a diagnostic that identifies simulated rotating storms—with the goal of diagnosing the relationship between forecast UH objects and observed tornado pathlengths. UH objects are contiguous swaths of UH exceeding a specified threshold. Including time allows tracks to span multiple hours and entire life cycles of simulated rotating storms. The object algorithm is applied to 3 yr of 36-h forecasts initialized daily from a 4-km grid-spacing version of the Weather Research and Forecasting Model (WRF) run in real time at the National Severe Storms Laboratory (NSSL), and forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA Hazardous Weather Testbed Spring Forecasting Experiment. Methods for visualizing UH object attributes are presented, and the relationship between pathlengths of UH objects and tornadoes for corresponding 18- or 24-h periods is examined. For deterministic NSSL-WRF UH forecasts, the relationship of UH pathlengths to tornadoes was much stronger during spring (March–May) than in summer (June–August). Filtering UH track segments produced by high-based and/or elevated storms improved the UH–tornado pathlength correlations. The best ensemble results were obtained after filtering high-based and/or elevated UH track segments for the 20 cases in April–May 2010, during which correlation coefficients were as high as 0.91. The results indicate that forecast UH pathlengths during spring could be a very skillful predictor for the severity of tornado outbreaks as measured by total pathlength.
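A hedged sketch of the core of such a space-time object algorithm follows: three-dimensional connected-component labeling of hourly-maximum UH exceeding a threshold, with each object's pathlength approximated from its horizontal extent. The synthetic field, the 75 m² s⁻² threshold, and the 4-km grid spacing are assumptions for illustration, not the study's exact settings.

```python
import numpy as np
from scipy import ndimage

# Hypothetical hourly-maximum UH forecast, dimensions (time, y, x), in m^2 s^-2.
uh = np.random.default_rng(1).gamma(1.0, 10.0, size=(24, 120, 120))

threshold = 75.0      # illustrative UH threshold
dx_km = 4.0           # illustrative grid spacing

# Label contiguous exceedance regions in space AND time, so a single object can
# span multiple hours of a simulated rotating storm's life cycle.
labels, nobj = ndimage.label(uh >= threshold)

# Approximate each object's pathlength from its horizontal bounding extent.
pathlengths_km = []
for sl in ndimage.find_objects(labels):
    ny = sl[1].stop - sl[1].start
    nx = sl[2].stop - sl[2].start
    pathlengths_km.append(np.hypot(ny, nx) * dx_km)

total_pathlength_km = float(np.sum(pathlengths_km))
print(nobj, "UH objects, cumulative pathlength ~", round(total_pathlength_km), "km")
```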
Abstract
This study evaluates forecasts of thermodynamic variables from five convection-allowing configurations of the Weather Research and Forecasting Model (WRF) with the Advanced Research core (WRF-ARW). The forecasts vary only in their planetary boundary layer (PBL) scheme, including three “local” schemes [Mellor–Yamada–Janjić (MYJ), quasi-normal scale elimination (QNSE), and Mellor–Yamada–Nakanishi–Niino (MYNN)] and two schemes that include “nonlocal” mixing [the asymmetric convective model version 2 (ACM2) and the Yonsei University (YSU) scheme]. The forecasts are compared to springtime radiosonde observations upstream from deep convection to gain a better understanding of the thermodynamic characteristics of these PBL schemes in this regime. The morning PBLs are all too cool and dry despite having little bias in PBL depth (except for YSU). In the evening, the local schemes produce shallower PBLs that are often too shallow and too moist compared to nonlocal schemes. However, MYNN is nearly unbiased in PBL depth, moisture, and potential temperature, which is comparable to the background North American Mesoscale model (NAM) forecasts. This result gives confidence in the use of the MYNN scheme in convection-allowing configurations of WRF-ARW to alleviate the typical cool, moist bias of the MYJ scheme in convective boundary layers upstream from convection. The morning cool and dry biases lead to an underprediction of mixed-layer CAPE (MLCAPE) and an overprediction of mixed-layer convective inhibition (MLCIN) at that time in all schemes. MLCAPE and MLCIN forecasts improve in the evening, with MYJ, QNSE, and MYNN having small mean errors, but ACM2 and YSU having a somewhat low bias. Strong observed capping inversions tend to be associated with an underprediction of MLCIN in the evening, as the model profiles are too smooth. MLCAPE tends to be overpredicted (underpredicted) by MYJ and QNSE (MYNN, ACM2, and YSU) when the observed MLCAPE is relatively small (large).
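For reference, mixed-layer CAPE and CIN of the kind verified here can be computed from a sounding with MetPy. The sketch below is a minimal example on a hypothetical profile, assuming MetPy's mixed_layer_cape_cin routine with its default 100-hPa mixed-layer depth; it is not the verification code used in the study.

```python
import numpy as np
from metpy.calc import mixed_layer_cape_cin
from metpy.units import units

# Hypothetical sounding (values invented), ordered from the surface upward.
p = np.array([1000, 950, 900, 850, 800, 700, 600, 500, 400, 300]) * units.hPa
T = np.array([30, 27, 24, 21, 18, 10, 2, -8, -20, -35]) * units.degC
Td = np.array([22, 20, 18, 15, 12, 2, -8, -20, -35, -50]) * units.degC

# MLCAPE/MLCIN for a parcel averaged over the lowest 100 hPa (MetPy default).
mlcape, mlcin = mixed_layer_cape_cin(p, T, Td)
print(mlcape, mlcin)
```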
Abstract
During spring 2016 the Probabilistic Hazard Information (PHI) prototype experiment was run in the National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) as part of the Forecasting a Continuum of Environmental Threats (FACETs) program. Nine National Weather Service forecasters were trained to use the web-based PHI prototype tool to produce dynamic PHI for severe weather threats. Archived and real-time weather scenarios were used to test this new paradigm of issuing probabilistic information, rather than deterministic information. The forecasters’ mental workload was evaluated after each scenario using the NASA Task Load Index (TLX) questionnaire. This study summarizes the analysis results of mental workload experienced by forecasters while using the PHI prototype. Six subdimensions of mental workload (mental demand, physical demand, temporal demand, performance, effort, and frustration) were analyzed to derive the top contributing factors to workload. Average mental workload was 46.6 (out of 100; standard deviation: 19; range: 70.8). Top contributing factors to workload included using automated guidance, PHI object quantity, multiple displays, and formulating probabilities in the new paradigm. Automated guidance provided support to forecasters in maintaining situational awareness and managing increased quantities of threats. The results of this study provided an understanding of forecasters’ mental workload and task strategies and developed insights to improve the usability of the PHI prototype tool.
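For context, an overall NASA-TLX workload score on a 0-100 scale is conventionally a weighted average of the six subdimension ratings, with weights obtained from pairwise comparisons; the sketch below shows that arithmetic on invented ratings and weights, and is not the study's scoring code (some studies use the unweighted "raw TLX" average instead).

```python
# Illustrative NASA-TLX computation (ratings and weights are invented).
ratings = {          # each subdimension rated 0-100
    "mental demand": 70, "physical demand": 15, "temporal demand": 60,
    "performance": 40, "effort": 65, "frustration": 30,
}
weights = {          # tallies from 15 pairwise comparisons; they sum to 15
    "mental demand": 5, "physical demand": 0, "temporal demand": 3,
    "performance": 2, "effort": 4, "frustration": 1,
}

overall = sum(ratings[k] * weights[k] for k in ratings) / sum(weights.values())
print(f"weighted TLX workload: {overall:.1f} / 100")
```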
Abstract
Examining forecasts from the Storm Scale Ensemble Forecast (SSEF) system run by the Center for Analysis and Prediction of Storms for the 2010 NOAA/Hazardous Weather Testbed Spring Forecasting Experiment, recent research diagnosed a strong relationship between the cumulative pathlengths of simulated rotating storms (measured using a three-dimensional object identification algorithm applied to forecast updraft helicity) and the cumulative pathlengths of tornadoes. This paper updates those results by including data from the 2011 SSEF system, and illustrates forecast examples from three major 2011 tornado outbreaks—16 and 27 April, and 24 May—as well as two forecast failure cases from June 2010. Finally, analysis updraft helicity (UH) from 27 April 2011 is computed using a three-dimensional variational data assimilation system to obtain 1.25-km grid-spacing analyses at 5-min intervals and compared to forecast UH from individual SSEF members.
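Updraft helicity itself is a layer integral of vertical velocity times vertical vorticity, conventionally taken over 2-5 km AGL. The sketch below shows that calculation on a hypothetical model column; the analytic w and vorticity profiles are invented stand-ins, not the paper's analysis or forecast fields.

```python
import numpy as np

# Hypothetical model column: heights (m AGL), vertical velocity w (m/s),
# and vertical vorticity zeta (1/s). Values are illustrative only.
z = np.arange(0.0, 10001.0, 250.0)
w = 15.0 * np.exp(-((z - 4000.0) / 2500.0) ** 2)
zeta = 0.01 * np.exp(-((z - 3500.0) / 2000.0) ** 2)

# UH = integral of w * zeta over the 2-5 km AGL layer (units: m^2 s^-2).
layer = (z >= 2000.0) & (z <= 5000.0)
uh = np.trapz(w[layer] * zeta[layer], z[layer])
print(f"updraft helicity: {uh:.0f} m^2 s^-2")
```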
Abstract
Output from a real-time high-resolution numerical model is used to generate synthetic infrared satellite imagery. It is shown that this imagery helps to characterize model-simulated large-scale precursors to the formation of deep-convective storms as well as the subsequent development of storm systems. A strategy for using this imagery in the forecasting of severe convective weather is presented. This strategy involves comparing model-simulated precursors to their observed counterparts to help anticipate model errors in the timing and location of storm formation, while using the simulated storm evolution as guidance.