Search Results
Showing items 51 - 60 of 98 for Author or Editor: Xuguang Wang
Abstract
Maximizing the accuracy of ensemble Kalman filter (EnKF) radar data assimilation requires that the observation operator sample the model state in the same manner that the radar samples the atmosphere. It may therefore be desirable to include volume averaging and power weighting in the observation operator. This study examines the impact of including radar-sampling effects in the Doppler velocity observation operator on EnKF analyses and forecasts. Locally substantial differences are found between a simple point operator and a realistic radar-sampling operator when they are applied to the model state at a single time. However, assimilation results indicate that the radar-sampling operator does not substantially improve the EnKF analyses or forecasts, while it greatly increases the computational cost of the data assimilation.
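The contrast between the two operators can be sketched as follows. This is a minimal, hypothetical 1D form (offsets only in elevation, Gaussian one-way power weighting), not the study's actual radar-sampling operator:

```python
import numpy as np

def radial_velocity(u, v, w, az, el):
    # Project Cartesian winds (m/s) onto the beam direction at
    # azimuth az and elevation el (radians); fall speed ignored.
    return (u * np.sin(az) * np.cos(el)
            + v * np.cos(az) * np.cos(el)
            + w * np.sin(el))

def point_op(u, v, w, az, el):
    # Simple point operator: model winds interpolated to beam center.
    return radial_velocity(u, v, w, az, el)

def sampling_op(u, v, w, az, el0, el_offsets, beamwidth):
    # Radar-sampling operator: Gaussian power-weighted average of
    # point projections across the beam (1D over elevation offsets;
    # u, v, w are arrays of model winds at the offset points).
    wgt = np.exp(-8.0 * np.log(2.0) * (el_offsets / beamwidth) ** 2)
    vr = radial_velocity(u, v, w, az, el0 + el_offsets)
    return float(np.sum(wgt * vr) / np.sum(wgt))
```

For winds that are uniform across the beam volume the two operators agree closely; the locally substantial differences noted above arise where the wind varies within the sampled volume.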
Abstract
A scale-dependent localization (SDL) method was formulated and implemented in the Gridpoint Statistical Interpolation (GSI)-based four-dimensional ensemble-variational (4DEnVar) system for the NCEP FV3-based Global Forecast System (GFS). SDL applies different localization to different scales of ensemble covariances while performing a single-step, simultaneous assimilation of all available observations. Two SDL variants, with (SDL-Cross) and without (SDL-NoCross) cross-wave-band covariances, were examined. The performance of two- and three-wave-band SDL experiments (W2 and W3, respectively) was evaluated through 1-month cycled data assimilation experiments. SDL improves global forecasts out to 5 days relative to scale-invariant localization, including the operationally tuned level-dependent scale-invariant localization (W1-Ope). The W3 SDL-Cross experiment shows more accurate tropical storm–track forecasts at shorter lead times than W1-Ope. Compared to the W2 SDL experiments, the W3 SDL counterparts, which apply tighter horizontal localization in the medium-scale wave band, generally show improved global forecasts below 100 hPa but degraded global forecasts above 50 hPa. While the outperformance of the W3 SDL-NoCross experiment over the W2 SDL-NoCross experiment below 100 hPa lasts for 5 days, that of the W3 SDL-Cross experiment over the W2 SDL-Cross experiment lasts for 3 days. Because local spatial averaging of ensemble covariances may alleviate sampling error, the SDL-NoCross experiments show slightly better forecasts than the SDL-Cross experiments at shorter lead times. However, the SDL-Cross experiments outperform the SDL-NoCross experiments at longer lead times, likely because they retain more heterogeneity in the ensemble covariances and thereby produce better-balanced analyses. The relative performance of tropical storm–track forecasts in the W2 and W3 SDL experiments is generally consistent with that of the global forecasts.
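A 1D analogue of the scale decomposition and band-specific localization can be sketched as follows. The sharp spectral filters and Gaussian tapers here are illustrative assumptions; the operational system works on the sphere with smoother filters and different localization functions:

```python
import numpy as np

def split_wavebands(perts, cutoffs):
    # Split ensemble perturbations (n_ens x n_grid) into wave bands
    # with sharp spectral filters; the bands sum back to perts.
    n = perts.shape[1]
    k = np.abs(np.fft.fftfreq(n)) * n          # integer wavenumber
    spec = np.fft.fft(perts, axis=1)
    bands, lo = [], 0.0
    for hi in list(cutoffs) + [np.inf]:
        mask = (k >= lo) & (k < hi)
        bands.append(np.fft.ifft(spec * mask, axis=1).real)
        lo = hi
    return bands

def sdl_covariance(perts, cutoffs, loc_lengths, x):
    # SDL-NoCross-style covariance: localize each band's covariance
    # with its own Gaussian taper, then sum (cross-band terms dropped).
    bands = split_wavebands(perts, cutoffs)
    d = np.abs(x[:, None] - x[None, :])        # pairwise distances
    cov = np.zeros((x.size, x.size))
    for b, L in zip(bands, loc_lengths):
        taper = np.exp(-0.5 * (d / L) ** 2)    # band-specific length L
        cov += taper * (b.T @ b) / (perts.shape[0] - 1)
    return cov
```

The SDL-Cross variant would additionally keep (and localize) the cross-band covariance terms dropped in this sketch.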
Abstract
This paper presents a case study from an intensive observing period (IOP) during the Plains Elevated Convection at Night (PECAN) field experiment that was focused on a bore generated by nocturnal convection. Observations from PECAN IOP 25 on 11 July 2015 are used to evaluate the performance of high-resolution Weather Research and Forecasting Model forecasts initialized using the Gridpoint Statistical Interpolation (GSI)-based ensemble Kalman filter. The focus is on understanding model errors and sensitivities in order to guide forecast improvements for bores associated with nocturnal convection. Model simulations of the bore amplitude are compared against eight retrieved vertical cross sections through the bore during the IOP. Sensitivities of the forecasts to microphysics and planetary boundary layer (PBL) parameterizations are also investigated. Forecasts initialized before the bore pulls away from the convection show a more realistic bore than forecasts initialized later from analyses of the bore itself, in part because the existing bore is smoothed in the ensemble mean. Experiments show that the choice of microphysics scheme affects the quality of the simulations: the Thompson and Morrison schemes produce unrealistically weak cold pools and bores, the WDM6 scheme produces cold pools that are too strong, and the WSM6 scheme is the most accurate. Most PBL schemes produce a realistic bore response to the cold pool, with the exception of the Mellor–Yamada–Nakanishi–Niino (MYNN) scheme, which creates too much turbulent mixing atop the bore. A new method of objectively estimating the depth of the near-surface stable layer corresponding to a simple two-layer model is also introduced, and the impacts of turbulent mixing on this estimate are discussed.
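The simple two-layer model referenced above relates the bore's lifting of a near-surface stable layer to its propagation speed. A minimal sketch using the classical two-layer hydraulic formula (the layer depths and temperature deficit below are illustrative values, not numbers from the study):

```python
import numpy as np

def bore_speed(h0, h1, dtheta, theta0, g=9.81):
    # Bore speed from two-layer hydraulic theory: a stable layer of
    # undisturbed depth h0 (m) with potential-temperature deficit
    # dtheta (K) relative to theta0 (K) is lifted to depth h1 by the
    # bore. C = sqrt(g' * h1 * (h1 + h0) / (2 * h0)), g' = g*dtheta/theta0.
    g_prime = g * dtheta / theta0          # reduced gravity (m/s^2)
    return np.sqrt(g_prime * h1 * (h1 + h0) / (2.0 * h0))
```

The bore strength h1/h0 thus maps directly onto speed, which is why an objective estimate of the stable-layer depth h0 matters for evaluating simulated bores.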
Abstract
The initiation of new convection at night in the Great Plains contributes to a nocturnal maximum in precipitation and produces localized heavy rainfall and severe weather hazards in the region. Although previous work has evaluated numerical model forecasts and data assimilation (DA) impacts for convection initiation (CI), most previous studies focused only on convection that initiates during the afternoon and not explicitly on nocturnal thunderstorms. In this study, we investigate the impact of assimilating in situ and radar observations for a nocturnal CI event on 25 June 2013 using an ensemble-based DA and forecast system. Results in this study show that a successful CI forecast resulted only when assimilating conventional in situ observations on the inner, convection-allowing domain. Assimilating in situ observations strengthened preexisting convection in southwestern Kansas by enhancing buoyancy and locally strengthening low-level convergence. The enhanced convection produced a cold pool that, together with increased convergence along the northwestern low-level jet (LLJ) terminus near the region of CI, was an important mechanism for lifting parcels to their level of free convection. Gravity waves were also produced atop the cold pool that provided further elevated ascent. Assimilating radar observations further improved the forecast by suppressing spurious convection and reducing the number of ensemble members that produced CI along a spurious outflow boundary. The fact that the successful CI forecasts resulted only when the in situ observations were assimilated suggests that accurately capturing the preconvective environment and specific mesoscale features is especially important for nocturnal CI forecasts.
Abstract
The Geostationary Operational Environmental Satellite-R Series will provide cloud-top observations on the convective scale at roughly the same frequency as Doppler radar observations. To evaluate the potential value of cloud-top temperature observations for data assimilation, an imperfect-model observing system simulation experiment is used. Synthetic cloud-top temperature observations from an idealized splitting supercell created using the Weather Research and Forecasting Model are assimilated along with synthetic radar reflectivity and radial velocity using an ensemble Kalman filter. Observations are assimilated every 5 min for 2.5 h with additive noise used to maintain ensemble spread.
Four experiments are conducted to explore the relative value of cloud-top temperature and radar observations. One experiment assimilates only satellite data, another only radar data, and two more assimilate both radar and satellite observations but in different orders. Results show a rather weak correlation between cloud-top temperature and horizontal winds, whereas larger correlations are found between cloud-top temperature and microphysics variables. Nevertheless, the assimilation of cloud-top temperature data alone produces a supercell storm in the ensemble, although the resulting ensemble has much larger spread than the ensembles of the radar-inclusive experiments. The addition of radar observations greatly improves the storm structure and reduces the overprediction of storm extent. Results further show that assimilating cloud-top temperature observations in addition to radar data does not lead to an improved forecast. However, assimilating cloud-top temperature can produce reasonable forecasts for areas lacking radar coverage.
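The serial, observation-at-a-time update underlying such EnKF experiments can be sketched with a serial ensemble square root filter. This is a generic sketch (identity-like observation operator picking single state elements, no localization, inflation, or additive noise), not the study's exact configuration:

```python
import numpy as np

def serial_ensrf_update(ens, obs, obs_idx, obs_err_var):
    # Serial ensemble square root filter in the style of Whitaker and
    # Hamill (2002): assimilate scalar observations one at a time.
    # ens: (n_members, n_state); obs[i] observes state element obs_idx[i].
    ens = np.array(ens, dtype=float)
    n = ens.shape[0]
    for y, j, r in zip(obs, obs_idx, obs_err_var):
        xm = ens.mean(axis=0)
        xp = ens - xm                            # perturbations
        hxp = xp[:, j]                           # ensemble in obs space
        hph = hxp @ hxp / (n - 1)                # background obs variance
        K = (xp.T @ hxp / (n - 1)) / (hph + r)   # Kalman gain (n_state,)
        alpha = 1.0 / (1.0 + np.sqrt(r / (hph + r)))
        xm = xm + K * (y - xm[j])                # update mean
        xp = xp - alpha * np.outer(hxp, K)       # update perturbations
        ens = xm + xp
    return ens
```

Assimilating observation types in different orders, as in the experiments above, corresponds to reordering the loop; with a linear operator and exact arithmetic the order is immaterial, but localization and sampling error make it matter in practice.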
Abstract
A novel object-based algorithm capable of identifying and tracking convective outflow boundaries in convection-allowing numerical models is presented in this study. The most distinct feature of the proposed algorithm is its ability to seamlessly analyze numerically simulated density currents and bores, both of which play an important role in the dynamics of nocturnal convective systems. The unified identification and classification of these morphologically different phenomena is achieved through a multivariate approach combined with appropriate image processing techniques. The tracking component of the algorithm utilizes two dynamical constraints, which improve the object association results in comparison to methods based on statistical assumptions alone. Special attention is placed on some of the outstanding challenges regarding the formulation of the algorithm and possible ways to address those in future research. Apart from describing the technical details behind the algorithm, this study also introduces specific algorithm applications relevant to the analysis and prediction of bores. These applications are illustrated for a retrospective case study simulated with a convection-allowing ensemble prediction system. The paper highlights how the newly developed algorithm tools naturally form a foundation for understanding the initiation, structure, and evolution of bores and convective systems in the nocturnal environment.
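The identification and association steps can be sketched generically as follows. This is a stand-in, not the paper's algorithm: a single-variable threshold replaces the multivariate identification, and a predicted displacement stands in for the two dynamical constraints (SciPy's `ndimage` is assumed available):

```python
import numpy as np
from scipy import ndimage

def identify_objects(field, threshold):
    # Label candidate outflow-boundary objects as connected regions
    # exceeding a threshold; returns (labeled array, object count).
    labels, n = ndimage.label(field > threshold)
    return labels, n

def associate(labels_t0, labels_t1, predicted_shift):
    # Associate objects across two times: shift the t0 labels by a
    # predicted displacement, then score pixel overlap with t1.
    shifted = np.roll(labels_t0, predicted_shift, axis=(0, 1))
    pairs = {}
    for a in range(1, labels_t0.max() + 1):
        overlap = labels_t1[shifted == a]
        overlap = overlap[overlap > 0]
        if overlap.size:
            counts = np.bincount(overlap)
            pairs[a] = int(counts.argmax())   # best-matching t1 object
    return pairs
```

Dynamically predicting the shift (rather than assuming persistence) is what improves association for fast-moving, morphologically changing features such as bores.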
Abstract
The Mesoscale Predictability Experiment (MPEX) conducted during the spring of 2013 included frequent coordinated sampling of near-storm environments via upsondes. These unique observations were taken to better understand the upscale effects of deep convection on the environment, and are used to validate the accuracy of convection-allowing (Δx = 3 km) model ensemble analyses. A 36-member ensemble was created with physics diversity using the Weather Research and Forecasting Model, and observations were assimilated via the Data Assimilation Research Testbed using an ensemble adjustment Kalman filter. A 4-day sequence of convective events from 28 to 31 May 2013 in the south-central United States was analyzed by assimilating Doppler radar and conventional observations. No MPEX upsonde observations were assimilated. Since the ensemble mean analyses produce an accurate depiction of the storms, the MPEX observations are used to verify the accuracy of the analyses of the near-storm environment.
A total of 81 upsondes were released over the 4-day period, sampling different regions of near-storm environments including storm inflow, outflow, and anvil. The MPEX observations reveal modest analysis errors overall when considering all samples, although specific environmental regions reveal larger errors in some state fields. The ensemble analyses underestimate cold pool depth, and storm inflow meridional winds have a pronounced northerly bias that results from an underprediction of inflow wind speed magnitude. Most bias distributions are Gaussian-like, with a few being bimodal owing to systematic biases of certain state fields and environmental regions.
Abstract
A hybrid ensemble transform Kalman filter–three-dimensional variational data assimilation (ETKF–3DVAR) system for the Weather Research and Forecasting (WRF) Model is introduced. The system is based on the existing WRF 3DVAR. Unlike WRF 3DVAR, which uses a simple static covariance model to estimate the forecast-error statistics, the hybrid system combines ensemble covariances with the static covariances to estimate the complex, flow-dependent forecast-error statistics. Ensemble covariances are incorporated through the extended control variable method during the variational minimization, and the ensemble perturbations are maintained by the computationally efficient ETKF. As an initial attempt to test and understand the newly developed system, both an observing system simulation experiment under the perfect-model assumption (Part I) and a real-observation experiment (Part II) were conducted. In these pilot studies, the WRF was run over a North American domain at a coarse grid spacing (200 km) to emphasize synoptic scales, owing to limited computational resources and the large number of experiments conducted. In Part I, simulated radiosonde wind and temperature observations were assimilated. The results demonstrated that the hybrid data assimilation method provided more accurate analyses than 3DVAR. The horizontal distributions of the errors showed that the hybrid analyses yielded larger improvements over data-sparse regions than over data-dense regions. It was also found that the ETKF ensemble spread in general agreed with the root-mean-square background forecast error for both first- and second-order measures. Given the coarse resolution, relatively sparse observation network, and perfect-model assumption adopted in this part of the study, caution is warranted when extrapolating the results to operational applications.
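The extended control variable method can be written compactly. The following is a standard textbook form consistent with the description above; the notation is assumed here, not quoted from the paper:

```latex
% Hybrid analysis increment: a static 3DVAR part plus an ensemble
% part, where each a_k is an extended control variable field and
% \circ denotes the elementwise (Schur) product with the k-th
% (normalized) ensemble perturbation x^e_k.
\Delta\mathbf{x} = \Delta\mathbf{x}_1
  + \sum_{k=1}^{K} \mathbf{a}_k \circ \mathbf{x}^{e}_k
% Hybrid cost function: beta_1 and beta_2 weight the static (B) and
% ensemble (A encodes the localization) terms; the last term is the
% usual fit to the innovations.
J = \beta_1\,\tfrac{1}{2}\,\Delta\mathbf{x}_1^{\mathsf T}\mathbf{B}^{-1}\Delta\mathbf{x}_1
  + \beta_2\,\tfrac{1}{2}\sum_{k=1}^{K}\mathbf{a}_k^{\mathsf T}\mathbf{A}^{-1}\mathbf{a}_k
  + \tfrac{1}{2}\,(\mathbf{y}^{o\prime}-\mathbf{H}\,\Delta\mathbf{x})^{\mathsf T}
      \mathbf{R}^{-1}(\mathbf{y}^{o\prime}-\mathbf{H}\,\Delta\mathbf{x})
```

A common convention constrains the weights via 1/beta_1 + 1/beta_2 = 1, so that the two covariance contributions blend to a full weight of one.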
Abstract
The hybrid ensemble transform Kalman filter–three-dimensional variational data assimilation (ETKF–3DVAR) system developed for the Weather Research and Forecasting (WRF) Model was further tested with real observations, as a follow-up to the observing system simulation experiment (OSSE) conducted in Part I. A domain encompassing North America was considered. Because of limited computational resources and the large number of experiments conducted, the forecasts and analyses employed relatively coarse grid spacing (200 km) to emphasize synoptic scales. As a first effort to explore the new system with real observations, relatively sparse observations consisting of radiosonde wind and temperature during 4 weeks of January 2003 were assimilated. The 12-h forecasts produced from the hybrid analyses had lower root-mean-square errors than those from 3DVAR. The hybrid improved the forecasts more in the western part of the domain than in the eastern part, and it produced larger improvements in the upper troposphere. The overall magnitude of the ETKF ensemble spread agreed with the overall magnitude of the background forecast error, although for individual variables and layers the consistency between the spread and the error was weaker than in the OSSE of Part I. Given the coarse resolution and relatively sparse observation network adopted in this study, caution is warranted when extrapolating these results to operational applications. A case study was also performed to further understand a large forecast improvement by the hybrid during the 4-week period. The flow-dependent adjustments produced by the hybrid extended a large distance into the eastern Pacific data-void region. The much-improved analysis and forecast by the hybrid in the data void subsequently improved forecasts downstream in the region of verification.
Although no moisture observations were assimilated, the hybrid updated the moisture fields flow dependently through cross-variable covariances defined by the ensemble, which improved the forecasts of cyclone development.
Abstract
Two approaches for accounting for errors in quantitative precipitation forecasts (QPFs) due to uncertainty in the microphysics (MP) parameterization in a convection-allowing ensemble are examined: a mixed-MP ensemble (MMP) composed mostly of double-moment schemes, and an ensemble with perturbed parameters within the Weather Research and Forecasting single-moment 6-class (WSM6) MP scheme (PPMP). Thirty-five cases of real-time storm-scale ensemble forecasts produced by the Center for Analysis and Prediction of Storms during the NOAA Hazardous Weather Testbed 2011 Spring Experiment were examined.
The MMP ensemble had better fractions Brier scores (FBSs) for most lead times and thresholds, but the PPMP ensemble had better relative operating characteristic (ROC) scores for higher precipitation thresholds. The pooled ensemble formed by randomly drawing five members from the MMP and PPMP ensembles was no more skillful than the more accurate of the MMP and PPMP ensembles. Significant positive impact was found when the two were combined to form a larger ensemble.
The QPF and the systematic behaviors of derived microphysical variables were also examined. The skill of the QPF among different members depended on the thresholds, verification metrics, and forecast lead times. The profiles of microphysics variables from the double-moment schemes contained more variation in the vertical than those from the single-moment members. Among the double-moment schemes, WDM6 produced the smallest raindrops and very large number concentrations. Among the PPMP members, the behaviors were found to be consistent with the prescribed intercept parameters. The perturbed intercept parameters used in the PPMP ensemble fell within the range of values retrieved from the double-moment schemes.
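The link between a prescribed intercept parameter and drop size in a single-moment scheme can be sketched for an exponential (Marshall–Palmer-type) distribution; the constants and parameter values below are illustrative, not the schemes' actual settings:

```python
import numpy as np
from math import gamma

def lambda_from_qr(n0, qr, rho_air=1.2, rho_w=1000.0):
    # Slope of an exponential raindrop size distribution
    # N(D) = n0 * exp(-lam * D) diagnosed from rain mixing ratio qr,
    # via rho_air*qr = (pi/6)*rho_w*M3 with M3 = 6*n0/lam**4.
    return (np.pi * rho_w * n0 / (rho_air * qr)) ** 0.25

def dsd_moment(n0, lam, k):
    # k-th moment of the exponential distribution: M_k = n0*k!/lam**(k+1).
    return n0 * gamma(k + 1) / lam ** (k + 1)

def mass_weighted_mean_diameter(lam):
    # Dm = M4/M3 = 4/lam for an exponential distribution.
    return 4.0 / lam
```

Raising the intercept n0 at fixed rainwater content steepens the slope and shrinks the mean drop, which is consistent with the behavior reported above for the prescribed intercepts in the PPMP members.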