Search Results
You are looking at items 41–50 of 50 for Author or Editor: Morris Weisman
Abstract
Ensembles provide an opportunity to greatly improve short-term prediction of local weather hazards, yet generating reliable predictions remains a significant challenge. In particular, convection-permitting ensemble forecast systems (CPEFSs) have persistent problems with underdispersion. Representing initial and/or lateral boundary condition uncertainty, along with forecast model error, provides a foundation for building a more dependable CPEFS, but best practice for ensemble system design is not well established.
Several configurations of CPEFSs are examined where ensemble forecasts are nested within a larger domain, drawing initial conditions from a downscaled, continuously cycled, ensemble data assimilation system that provides state-dependent initial condition uncertainty. The control ensemble forecast, with initial condition uncertainty only, is skillful but underdispersive. To improve the reliability of the ensemble forecasts, the control ensemble is supplemented with 1) perturbed lateral boundary conditions, or with model error representation using either 2) stochastic kinetic energy backscatter or 3) stochastically perturbed parameterization tendencies. Forecasts are evaluated against stage IV accumulated precipitation analyses and radiosonde observations. Perturbed ensemble forecasts are also compared to the control forecast to assess the relative impact of adding forecast perturbations. For precipitation forecasts, all perturbation approaches improve ensemble reliability relative to the control CPEFS. Deterministic ensemble member forecast skill, verified against radiosonde observations, decreases when forecast perturbations are added, while ensemble mean forecasts remain about as skillful as the control.
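As an illustration of the underdispersion being diagnosed here, a rank histogram is one common check: a reliable ensemble yields a roughly flat histogram, while an underdispersive one yields a U shape because observations too often fall outside the ensemble envelope. The sketch below is a generic rank-histogram computation on synthetic data, not the verification code used in the study; the member count, array shapes, and noise levels are illustrative assumptions.

```python
# Generic rank-histogram sketch on synthetic data (not the study's verification code).
import numpy as np

def rank_histogram(ens, obs):
    """ens: (n_members, n_cases) ensemble values; obs: (n_cases,) verifying values.
    Returns counts of the observation's rank within each ensemble (0..n_members)."""
    n_members = ens.shape[0]
    ranks = np.sum(ens < obs[np.newaxis, :], axis=0)
    return np.bincount(ranks, minlength=n_members + 1)

# Illustrative underdispersive ensemble: member spread (0.5) is smaller than the
# spread of the verifying values (1.0), so the outer rank bins dominate (U shape).
rng = np.random.default_rng(0)
obs = rng.normal(scale=1.0, size=5000)
members = rng.normal(scale=0.5, size=(10, 5000))
print(rank_histogram(members, obs))
```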
Abstract
Three diagnostic fields were examined to assess their ability to act as surrogates for tornadoes in a convection-allowing ensemble system run during the spring of 2015. The diagnostics included midlevel (2–5 km AGL) updraft helicity (UH25), low-level (0–3 km AGL) updraft helicity (UH03), and low-level (1 km AGL) vertical relative vorticity (RVORT1). RVORT1 was used as a direct measure of low-level rotation strength. Each storm’s RVORT1 magnitude and near-storm environment properties were extracted from each hour’s forecasts using an object-based approach. The near-storm environments of storm objects with large magnitudes of RVORT1 were very similar to the environments identified as conducive for the development of tornadic supercells in previous proximity sounding-based studies (e.g., low lifted condensation levels and strong low-level shear). This motivated the use of RVORT1 as a direct surrogate for tornadoes, without the need to filter forecasts with environmental information. The relationship between UH25 and UH03 was also explored among the simulated storms; UH03 only exceeded UH25 in storms occurring within low-CAPE/high-shear environments, while UH03 rarely exceeded UH25 in traditional supercell environments. Next-day ensemble surrogate severe probability forecasts (E-SSPFs) for tornadoes were generated using these diagnostics for 92 forecasts, with thresholds based on the number of observed tornado reports. E-SSPFs for tornadoes using RVORT1 and UH03 were more skillful than E-SSPFs using UH25. The UH25 E-SSPFs possessed little skill, regardless of threshold or smoothing length scale. All E-SSPFs suffered from poor sharpness at skillful scales, with few forecast probabilities greater than 40%.
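For readers unfamiliar with these diagnostics: updraft helicity is the vertical integral of the product of vertical velocity and vertical vorticity over a layer (2–5 km AGL for UH25, 0–3 km AGL for UH03), and a surrogate severe probability field is typically built by flagging grid points where a diagnostic exceeds a threshold and smoothing the flags with a Gaussian kernel. The sketch below illustrates both steps in generic form; it is not the authors' code, and the grids, threshold, and smoothing scale are placeholder assumptions.

```python
# Illustrative sketch of layer-integrated updraft helicity and a threshold-then-smooth
# surrogate severe probability field; values and shapes are placeholders, not the
# settings used in the study.
import numpy as np
from scipy.ndimage import gaussian_filter

def updraft_helicity(w, zeta, z, z_bot, z_top):
    """w, zeta: (nz, ny, nx) vertical velocity and vertical vorticity on height levels;
    z: (nz,) heights AGL (m). Trapezoid-rule integral of w*zeta over [z_bot, z_top]."""
    in_layer = (z >= z_bot) & (z <= z_top)
    wz = (w * zeta)[in_layer]
    dz = np.diff(z[in_layer])
    return np.sum(0.5 * (wz[1:] + wz[:-1]) * dz[:, None, None], axis=0)

def surrogate_severe_probability(diag_max, threshold, sigma_gridpoints):
    """diag_max: (ny, nx) daily maximum of a diagnostic (e.g., UH or RVORT1) per point.
    Flags threshold exceedances and smooths them into a probability-like field."""
    exceed = (diag_max >= threshold).astype(float)
    return gaussian_filter(exceed, sigma=sigma_gridpoints)

# Usage with synthetic fields; for an ensemble, the exceedance fields from each member
# would typically be combined (e.g., averaged) before or after smoothing.
rng = np.random.default_rng(1)
nz, ny, nx = 20, 50, 50
z = np.linspace(0.0, 10000.0, nz)
w = rng.normal(size=(nz, ny, nx))
zeta = rng.normal(scale=1e-3, size=(nz, ny, nx))
uh25 = updraft_helicity(w, zeta, z, 2000.0, 5000.0)
sspf = surrogate_severe_probability(uh25, threshold=0.5, sigma_gridpoints=4)
```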
Abstract
In May and June 2013, the National Center for Atmospheric Research produced real-time 48-h convection-allowing ensemble forecasts at 3-km horizontal grid spacing using the Weather Research and Forecasting (WRF) Model in support of the Mesoscale Predictability Experiment field program. The ensemble forecasts were initialized twice daily at 0000 and 1200 UTC from analysis members of a continuously cycling, limited-area, mesoscale (15 km) ensemble Kalman filter (EnKF) data assimilation system and evaluated with a focus on precipitation and severe weather guidance. Deterministic WRF Model forecasts initialized from GFS analyses were also examined. Subjectively, the ensemble forecasts often produced areas of intense convection over regions where severe weather was observed. Objective statistics confirmed these subjective impressions and indicated that the ensemble was skillful at predicting precipitation and severe weather events. Forecasts initialized at 1200 UTC were more skillful regarding precipitation and severe weather placement than forecasts initialized 12 h earlier at 0000 UTC, and the ensemble forecasts were typically more skillful than GFS-initialized forecasts. At times, 0000 UTC GFS-initialized forecasts had temporal distributions of domain-average rainfall closer to observations than EnKF-initialized forecasts. However, particularly when GFS analyses initialized WRF Model forecasts, 1200 UTC forecasts produced more rainfall during the first diurnal maximum than 0000 UTC forecasts. This behavior was mostly attributed to WRF Model initialization of clouds and moist physical processes. The success of these real-time ensemble forecasts demonstrates the feasibility of using limited-area continuously cycling EnKFs as a method to initialize convection-allowing ensemble forecasts, and future real-time high-resolution ensemble development leveraging EnKFs seems justified.
Abstract
This study assesses forecasts of the preconvective and near-storm environments from the convection-allowing models run for the 2008 National Oceanic and Atmospheric Administration (NOAA) Hazardous Weather Testbed (HWT) spring experiment. Evaluating the performance of convection-allowing models (CAMs) is important for encouraging their appropriate use and development for both research and operations. Systematic errors in the CAM forecasts included a cold bias in mean 2-m and 850-hPa temperatures over most of the United States and smaller-than-observed vertical wind shear and 850-hPa moisture over the high plains. The placement of airmass boundaries was similar in forecasts from the CAMs and the operational North American Mesoscale (NAM) model that provided the initial and boundary conditions. This correspondence contributed to similar characteristics for spatial and temporal mean error patterns. However, substantial errors were found in the CAM forecasts away from airmass boundaries. The result is that the deterministic CAMs do not predict the environment as well as the NAM. It is suggested that parameterized processes used at convection-allowing grid lengths, particularly in the boundary layer, may be contributing to these errors.
It is also shown that mean forecasts from an ensemble of CAMs were substantially more accurate than forecasts from deterministic CAMs. If the improvement gained by moving from a deterministic to an ensemble framework is as large for mesoscale models as it is for CAMs, then an ensemble of mesoscale model forecasts could predict the environment even better than an ensemble of CAMs. Therefore, it is suggested that the combination of mesoscale (convection-parameterizing) and CAM configurations is an appropriate avenue to explore for optimizing the use of limited computer resources for severe weather forecasting applications.
Abstract
Sensitivity of 0–12-h warm-season precipitation forecasts to atmospheric initial conditions, including those from different large-scale model analyses and from rapidly cycled (RC) three-dimensional variational data assimilation (3DVAR) with and without radar data, is investigated for a 6-day period during the International H2O Project. Neighborhood-based precipitation verification is used to compare forecasts made with the Advanced Research core of the Weather Research and Forecasting Model (ARW-WRF). Three significant convective episodes are examined by comparing the precipitation patterns and locations from different forecast experiments. Two of these three case studies illustrate why the RC data assimilation succeeded or failed in improving forecast skill. Results indicate that the use of a higher-resolution analysis in the initialization, rapid update cycling via WRF 3DVAR data assimilation, and the additional assimilation of radar observations each play a role in shortening the period of the initial precipitation spinup and in placing storms closer to observations, thus improving precipitation forecast skill by up to 8–9 h. Impacts of the data assimilation differ for forecasts initialized at 0000 and 1200 UTC. The case studies show that the pattern and location of the forecast precipitation were noticeably improved with radar data assimilation for the two late-afternoon cases that featured lines of convection driven by surface-based cold pools. In contrast, the RC 3DVAR, both with and without radar data, had negative impacts on convective forecasts for a case of morning elevated convection associated with a midlatitude short-wave trough.
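The neighborhood-based verification mentioned above compares forecast and observed precipitation within spatial windows rather than point by point; one widely used metric of this kind is the fractions skill score (FSS). The sketch below is a generic FSS computation and is not necessarily the formulation used in this study; the threshold and window size are application-dependent choices.

```python
# Generic fractions skill score (FSS) sketch for neighborhood precipitation verification;
# not the authors' implementation. Threshold and window size are illustrative.
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    """forecast, observed: 2D precipitation fields on the same grid.
    Compares fractional coverage of threshold exceedance within square neighborhoods."""
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```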
Abstract
During the 2005 NOAA Hazardous Weather Testbed Spring Experiment, two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels, while the second used 2-km grid spacing and 51 vertical levels.
Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18–30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts.
In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill—the scale of mesoscale convective features—the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.
The Bow Echo and MCV Experiment: Observations and Opportunities
The Bow Echo and Mesoscale Convective Vortex Experiment (BAMEX) is a research investigation using highly mobile platforms to examine the life cycles of mesoscale convective systems. It represents a combination of two related investigations to study (a) bow echoes, principally those that produce damaging surface winds and last at least 4 h, and (b) larger convective systems that produce long-lived mesoscale convective vortices (MCVs). The field phase of BAMEX utilized three instrumented research aircraft and an array of mobile ground-based instruments. Two long-range turboprop aircraft were equipped with pseudo-dual-Doppler radar capability; the third aircraft was a jet equipped with dropsondes. The aircraft documented the environmental structure of mesoscale convective systems (MCSs), observed the kinematic and thermodynamic structure of the convective line and stratiform regions (where rear-inflow jets and MCVs reside), and captured the structure of mature MCVs. The ground-based instruments augmented sounding coverage and documented the thermodynamic structure of the planetary boundary layer (PBL), both within MCSs and in their environment. The present article reviews the scientific goals of the study and the facility deployment strategy, summarizes the cases observed, and highlights significant forthcoming research directions and opportunities.
During May–July 2000, the Severe Thunderstorm Electrification and Precipitation Study (STEPS) occurred in the High Plains, near the Colorado–Kansas border. STEPS aimed to achieve a better understanding of the interactions between kinematics, precipitation, and electrification in severe thunderstorms. Specific scientific objectives included 1) understanding the apparent major differences in precipitation output from supercells that have led to them being classified as low precipitation (LP), classic or medium precipitation, and high precipitation; 2) understanding lightning formation and behavior in storms, and how lightning differs among storm types, particularly to better understand the mechanisms by which storms produce predominantly positive cloud-to-ground (CG) lightning; and 3) verifying and improving microphysical interpretations from polarimetric radar. The project involved the use of a multiple-Doppler polarimetric radar network, as well as a time-of-arrival very high frequency (VHF) lightning mapping system, an armored research aircraft, electric field meters carried on balloons, mobile mesonet vehicles, instruments to detect and classify transient luminous events (TLEs; e.g., sprites and blue jets) over thunderstorms, and mobile atmospheric sounding equipment. The project featured significant collaboration with the local National Weather Service office in Goodland, Kansas, as well as outreach to the general public. The project gathered data on a number of different cases, including LP storms, supercells, and mesoscale convective systems, among others. Many of the storms produced mostly positive CG lightning during significant portions of their lifetimes and also exhibited unusual electrical structures with polarity opposite to that of ordinary thunderstorms. The field data from STEPS are expected to bring new advances in the understanding of supercells, positive CG lightning, TLEs, and precipitation formation in convective storms.
Abstract
The Mesoscale Predictability Experiment (MPEX) was conducted from 15 May to 15 June 2013 in the central United States. MPEX was motivated by the basic question of whether experimental, subsynoptic observations can extend convective-scale predictability and otherwise enhance skill in short-term regional numerical weather prediction.
Observational tools for MPEX included the National Science Foundation (NSF)–National Center for Atmospheric Research (NCAR) Gulfstream V aircraft (GV), which featured the Airborne Vertical Atmospheric Profiling System mini-dropsonde system and a microwave temperature-profiling (MTP) system, as well as several ground-based mobile upsonde systems. Basic operations involved two missions per day: an early morning mission with the GV, well upstream of anticipated convective storms, and an afternoon and early evening mission with the mobile sounding units to sample the initiation and upscale feedbacks of the convection.
A total of 18 intensive observing periods (IOPs) were completed during the field phase, representing a wide spectrum of synoptic regimes and convective events, including several major severe weather and/or tornado outbreak days. The novel observational strategy employed during MPEX is documented herein, as is the unique role of the ensemble modeling efforts—which included an ensemble sensitivity analysis—to both guide the observational strategies and help address the potential impacts of such enhanced observations on short-term convective forecasting. Preliminary results of retrospective data assimilation experiments are discussed, as are data analyses showing upscale convective feedbacks.
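The ensemble sensitivity analysis mentioned above typically estimates, at each grid point, the linear sensitivity of a scalar forecast metric to an initial-condition field as the ensemble covariance between the metric and that field divided by the field's ensemble variance. The sketch below shows that calculation in generic form; it is illustrative only, does not reproduce the MPEX configuration, and the field and metric named in the docstring are hypothetical examples.

```python
# Minimal ensemble sensitivity sketch: regress a scalar forecast metric on an
# initial-condition field across members. Illustrative only; not the MPEX code.
import numpy as np

def ensemble_sensitivity(J, x0):
    """J: (n_members,) scalar forecast metric (e.g., area-average rainfall).
    x0: (n_members, ny, nx) initial-condition field (e.g., 500-hPa height).
    Returns dJ/dx0 at each grid point as cov(J, x0) / var(x0)."""
    Jp = J - J.mean()
    xp = x0 - x0.mean(axis=0)
    cov = np.einsum('m,mij->ij', Jp, xp) / (J.size - 1)
    var = xp.var(axis=0, ddof=1)
    return np.where(var > 0, cov / var, 0.0)
```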
Abstract
The Deep Convective Clouds and Chemistry (DC3) field experiment produced an exceptional dataset on thunderstorms, including their dynamical, physical, and electrical structures and their impact on the chemical composition of the troposphere. The field experiment gathered detailed information on the chemical composition of the inflow and outflow regions of midlatitude thunderstorms in northeast Colorado, west Texas to central Oklahoma, and northern Alabama. A unique aspect of the DC3 strategy was to locate and sample the convective outflow a day after active convection in order to measure the chemical transformations within the upper-tropospheric convective plume. These data are being analyzed to investigate transport and dynamics of the storms, scavenging of soluble trace gases and aerosols, production of nitrogen oxides by lightning, relationships between lightning flash rates and storm parameters, chemistry in the upper troposphere that is affected by the convection, and related source characterization of the three sampling regions. DC3 also documented biomass-burning plumes and the interactions of these plumes with deep convection.