Search Results
Items 31–40 of 40 for Author or Editor: Craig S. Schwartz
Abstract
Nine sets of 36-h, 10-member, convection-allowing ensemble (CAE) forecasts with 3-km horizontal grid spacing were produced over the conterminous United States for a 4-week period. These CAEs had identical configurations except for their initial conditions (ICs), which were constructed to isolate CAE forecast sensitivity to the resolution of both IC perturbations and the central initial states about which the perturbations were centered. The IC perturbations and central initial states were provided by limited-area ensemble Kalman filter (EnKF) analyses with both 15- and 3-km horizontal grid spacings, as well as by NCEP’s Global Forecast System (GFS) and Global Ensemble Forecast System. Given fixed-resolution IC perturbations, reducing the horizontal grid spacing of the central initial states improved ∼1–12-h precipitation forecasts. Conversely, for constant-resolution central initial states, reducing the horizontal grid spacing of the IC perturbations led to comparatively smaller short-term forecast improvements or none at all. Overall, all CAEs initially centered on 3-km EnKF mean analyses produced objectively better ∼1–12-h precipitation forecasts than CAEs initially centered on GFS or 15-km EnKF mean analyses, regardless of IC perturbation resolution, strongly suggesting that, for short-term CAE forecasting applications, it is more important for the central initial states than for the IC perturbations to possess fine-scale structures, although fine-scale perturbations could still be critical for data assimilation purposes. These findings have important implications for future operational CAE forecast systems and suggest that CAE IC development efforts focus on producing the best possible high-resolution deterministic analyses to serve as central initial states for CAEs.
Significance Statement
Ensembles of weather model forecasts are composed of different “members” that, when combined, can produce probabilities that specific weather events will occur. Ensemble forecasts begin from specified atmospheric states, called initial conditions. For ensembles whose initial conditions differ across members, the initial conditions can be viewed as a set of small perturbations added to a central state provided by a single model field. Our study suggests it is more important to increase the horizontal resolution of the central state than that of the perturbations when initializing ensemble forecasts with 3-km horizontal grid spacing. These findings suggest a potential for computational savings and a streamlined process for improving high-resolution ensemble initial conditions.
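The recentering idea described here can be made concrete with a minimal sketch, assuming all fields are already on a common grid. Names, shapes, and data below are invented for illustration, not taken from the study's workflow: each member's deviation from the ensemble mean is added to a separately chosen central analysis.

```python
# Hypothetical sketch: re-center an ensemble of ICs about a chosen central
# analysis by adding each member's deviation from the ensemble mean.
import numpy as np

def recenter_ics(members: np.ndarray, central: np.ndarray) -> np.ndarray:
    """new_member_i = central + (member_i - ensemble mean)."""
    perturbations = members - members.mean(axis=0)
    return central + perturbations

rng = np.random.default_rng(0)
members = rng.normal(size=(10, 64, 64))   # e.g., 10 coarse EnKF member analyses
central = rng.normal(size=(64, 64))       # e.g., a high-resolution mean analysis
new_ics = recenter_ics(members, central)
assert np.allclose(new_ics.mean(axis=0), central)  # mean is the central state
```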
Abstract
Several limited-area 80-member ensemble Kalman filter (EnKF) data assimilation systems with 15-km horizontal grid spacing were run over a computational domain spanning the conterminous United States (CONUS) for a 4-week period. One EnKF employed continuous cycling, where the prior ensemble was always the 1-h forecast initialized from the previous cycle’s analysis. In contrast, the other EnKFs used a partial cycling procedure, where limited-area states were discarded after 12 or 18 h of self-contained hourly cycles and reinitialized the next day from global model fields. “Blended” states were also constructed by combining large scales from global ensemble initial conditions (ICs) with small scales from limited-area continuously cycling EnKF analyses using a low-pass filter. Both the blended states and EnKF analysis ensembles initialized 36-h, 10-member ensemble forecasts with 3-km horizontal grid spacing. Continuously cycling EnKF analyses initialized ∼1–18-h precipitation forecasts that were comparable to or somewhat better than those with partial cycling EnKF ICs. Conversely, ∼18–36-h forecasts with partial cycling EnKF ICs were comparable to or better than those with unblended continuously cycling EnKF ICs. However, blended ICs yielded ∼18–36-h forecasts that were statistically indistinguishable from those with partial cycling ICs. ICs that more closely resembled global analysis spectral characteristics at wavelengths > 200 km, like partial cycling and blended ICs, were associated with relatively good ∼18–36-h forecasts. Ultimately, these findings suggest that EnKFs employing a combination of continuous cycling and blending can potentially replace the partial cycling assimilation systems that currently initialize operational limited-area models over the CONUS without sacrificing forecast quality.
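The blending step can be sketched with a simple Fourier low-pass filter. This is a toy illustration under stated assumptions (both fields on one common grid, a sharp spectral cutoff at the 200-km wavelength mentioned above, invented data); operational blending schemes typically use smoother filter transfer functions.

```python
# Hedged sketch of spectral blending: large scales from a global analysis,
# small scales from a limited-area (LAM) EnKF analysis.
import numpy as np

def blend(global_field: np.ndarray, lam_field: np.ndarray,
          dx_km: float = 15.0, cutoff_km: float = 200.0) -> np.ndarray:
    """Keep wavelengths > cutoff from the global field, the rest from the LAM."""
    ny, nx = global_field.shape
    ky = np.fft.fftfreq(ny, d=dx_km)[:, None]          # cycles per km
    kx = np.fft.fftfreq(nx, d=dx_km)[None, :]
    keep_global = np.sqrt(kx**2 + ky**2) <= 1.0 / cutoff_km
    blended_hat = np.where(keep_global,
                           np.fft.fft2(global_field),  # large scales: global
                           np.fft.fft2(lam_field))     # small scales: LAM EnKF
    return np.real(np.fft.ifft2(blended_hat))

rng = np.random.default_rng(1)
blended = blend(rng.normal(size=(128, 128)), rng.normal(size=(128, 128)))
```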
Significance Statement
Numerical weather prediction models (i.e., weather models) are initialized through a process called data assimilation, which combines real atmospheric observations with a previous short-term weather model forecast using statistical techniques. The overarching data assimilation strategy currently used to initialize operational regional weather models over the United States has several disadvantages that ultimately limit progress toward improving weather model forecasts. Thus, we suggest an alternative data assimilation strategy be adopted to initialize a next-generation, high-resolution (∼3 km) probabilistic forecast system currently being developed. This alternative approach preserves forecast quality while fostering a framework that can accelerate weather model improvements, which in turn will lead to better weather forecasts.
Abstract
During the 2007 NOAA Hazardous Weather Testbed Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced a daily 10-member 4-km horizontal resolution ensemble forecast covering approximately three-fourths of the continental United States. Each member used the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model core, which was initialized at 2100 UTC, ran for 33 h, and resolved convection explicitly. Different initial condition (IC), lateral boundary condition (LBC), and physics perturbations were introduced in 4 of the 10 ensemble members, while the remaining 6 members used identical ICs and LBCs, differing only in terms of microphysics (MP) and planetary boundary layer (PBL) parameterizations. This study focuses on precipitation forecasts from the ensemble.
The ensemble forecasts reveal WRF-ARW sensitivity to MP and PBL schemes. For example, over the 7-week experiment, the Mellor–Yamada–Janjić PBL and Ferrier MP parameterizations were associated with relatively high precipitation totals, while members configured with the Thompson MP or Yonsei University PBL scheme produced comparatively less precipitation. Additionally, different approaches for generating probabilistic ensemble guidance are explored. Specifically, a “neighborhood” approach is described and shown to considerably enhance the skill of probabilistic forecasts for precipitation when combined with a traditional technique of producing ensemble probability fields.
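The neighborhood approach lends itself to a compact sketch: point exceedance probabilities from the ensemble are averaged over a square neighborhood around each grid point. This illustrates the general technique, not necessarily the paper's exact formulation; the threshold, neighborhood width, and data below are invented.

```python
# Hedged sketch of a neighborhood ensemble probability for precipitation.
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_probability(qpf_members: np.ndarray,
                             thresh_mm: float = 25.4,
                             width_pts: int = 11) -> np.ndarray:
    """Point ensemble probabilities smoothed over a square neighborhood."""
    exceed = (qpf_members >= thresh_mm).astype(float)  # (n_members, ny, nx)
    point_prob = exceed.mean(axis=0)                   # traditional ensemble prob.
    return uniform_filter(point_prob, size=width_pts, mode="nearest")

members = np.random.default_rng(2).gamma(2.0, 8.0, size=(10, 200, 200))
prob = neighborhood_probability(members)               # values in [0, 1]
```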
Abstract
While convective storm mode is explicitly depicted in convection-allowing model (CAM) output, subjectively diagnosing mode in large volumes of CAM forecasts can be burdensome. In this work, four machine learning (ML) models were trained to probabilistically classify CAM storms into one of three modes: supercells, quasi-linear convective systems, and disorganized convection. The four ML models included a dense neural network (DNN), logistic regression (LR), a convolutional neural network (CNN), and a semisupervised CNN–Gaussian mixture model (GMM). The DNN, CNN, and LR were trained with a set of hand-labeled CAM storms, while the semisupervised GMM used updraft helicity and storm size to generate clusters, which were then hand labeled. When evaluated using storms withheld from training, the four classifiers had similar ability to discriminate between modes, but the GMM had worse calibration. The DNN and LR had similar objective performance to the CNN, suggesting that CNN-based methods may not be needed for mode classification tasks. The mode classifications from all four classifiers successfully approximated the known climatology of modes in the United States, including a maximum in supercell occurrence in the U.S. Central Plains. Further, the modes occurred in environments recognized to support the three different storm morphologies. Finally, storm mode provided useful information about hazard type, e.g., storm reports were most likely with supercells, further supporting the efficacy of the classifiers. Future applications, including the use of objective CAM mode classifications as a novel predictor in ML systems, could potentially lead to improved forecasts of convective hazards.
Significance Statement
Whether a thunderstorm produces hazards such as tornadoes, hail, or intense wind gusts is in part determined by whether the storm takes the form of a single cell or a line. Numerical forecasting models can now provide forecasts that depict this structure. We tested several automated algorithms to extract this information from forecast output using machine learning. All of the automated methods were able to distinguish among the three convective types, with the simpler techniques providing classifications as skillful as those from the more complex approaches. The automated classifications also successfully discriminated between thunderstorm hazards, potentially leading to new forecast tools and better forecasts of high-impact convective hazards.
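As a toy illustration of the simplest classifier evaluated above, the sketch below fits a multinomial logistic regression mapping two per-storm features (echoing the updraft helicity and storm size the GMM used for clustering) to three mode probabilities. Features, labels, and data are synthetic and purely illustrative.

```python
# Hypothetical sketch: probabilistic storm-mode classification with
# logistic regression on invented per-storm features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_storms = 300
# Invented features: [max updraft helicity, storm area]
X = np.column_stack([rng.gamma(2.0, 40.0, n_storms),
                     rng.gamma(2.0, 300.0, n_storms)])
# Invented labels: 0 = supercell, 1 = QLCS, 2 = disorganized
y = rng.integers(0, 3, n_storms)

clf = LogisticRegression(max_iter=1000).fit(X, y)
mode_probs = clf.predict_proba(X[:5])   # probabilistic mode classifications
```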
Abstract
During the 2007 NOAA Hazardous Weather Testbed (HWT) Spring Experiment, the Center for Analysis and Prediction of Storms (CAPS) at the University of Oklahoma produced convection-allowing forecasts from a single deterministic 2-km model and a 10-member 4-km-resolution ensemble. In this study, the 2-km deterministic output was compared with forecasts from the 4-km ensemble control member. Other than the difference in horizontal resolution, the two sets of forecasts featured identical Advanced Research Weather Research and Forecasting model (ARW-WRF) configurations, including vertical resolution, forecast domain, initial and lateral boundary conditions, and physical parameterizations. Therefore, forecast disparities were attributed solely to differences in horizontal grid spacing. This study is a follow-up to similar work that was based on results from the 2005 Spring Experiment. Unlike the 2005 experiment, however, model configurations were more rigorously controlled in the present study, providing a more robust dataset and a cleaner isolation of the dependence on horizontal resolution. Additionally, in this study, the 2- and 4-km outputs were compared with 12-km forecasts from the North American Mesoscale (NAM) model. Model forecasts were analyzed using objective verification of mean hourly precipitation and visual comparison of individual events, primarily during the 21- to 33-h forecast period to examine the utility of the models as next-day guidance. On average, both the 2- and 4-km model forecasts showed substantial improvement over the 12-km NAM. However, although the 2-km forecasts produced more-detailed structures on the smallest resolvable scales, the patterns of convective initiation, evolution, and organization were remarkably similar to the 4-km output. Moreover, on average, metrics such as equitable threat score, frequency bias, and fractions skill score revealed no statistically significant improvement of the 2-km forecasts over the 4-km forecasts. These results, based on the 2007 dataset, corroborate previous findings, suggesting that decreasing horizontal grid spacing from 4 to 2 km provides little added value as next-day guidance for severe convective storm and heavy rain forecasters in the United States.
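For reference, the two contingency-table scores named above follow directly from counts of hits, misses, and false alarms at a chosen threshold. A minimal sketch (threshold and fields are illustrative):

```python
# Equitable threat score (ETS) and frequency bias from a 2x2 contingency table.
import numpy as np

def ets_and_bias(forecast: np.ndarray, observed: np.ndarray,
                 thresh: float = 2.5):
    """ETS and frequency bias at one precipitation threshold."""
    f, o = forecast >= thresh, observed >= thresh
    hits = np.sum(f & o)
    misses = np.sum(~f & o)
    false_alarms = np.sum(f & ~o)
    hits_random = (hits + misses) * (hits + false_alarms) / forecast.size
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)  # assumes observed events exist
    return float(ets), float(bias)

rng = np.random.default_rng(4)
ets, bias = ets_and_bias(rng.gamma(1.5, 2.0, (200, 200)),
                         rng.gamma(1.5, 2.0, (200, 200)))
```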
Abstract
As part of NOAA’s Hazardous Weather Testbed Spring Forecasting Experiment (SFE) in 2020, an international collaboration yielded a set of real-time convection-allowing model (CAM) forecasts over the contiguous United States in which the model configurations and initial/boundary conditions were varied in a controlled manner. Three model configurations were employed, among which the Finite Volume Cubed-Sphere (FV3), Unified Model (UM), and Advanced Research version of the Weather Research and Forecasting (WRF-ARW) Model dynamical cores were represented. Two runs were produced for each configuration: one driven by NOAA’s Global Forecast System for initial and boundary conditions, and the other driven by the Met Office’s operational global UM. For 32 cases during SFE2020, these runs were initialized at 0000 UTC and integrated for 36 h. Objective verification of model fields relevant to convective forecasting illuminates differences in the influence of configuration versus driving model pertinent to the ongoing problem of optimizing spread and skill in CAM ensembles. The UM and WRF configurations tend to outperform FV3 for forecasts of precipitation, thermodynamics, and simulated radar reflectivity; using a driving model with the native CAM core also tends to produce better skill in aggregate. Reflectivity and thermodynamic forecasts were found to cluster more by configuration than by driving model at lead times greater than 18 h. The two UM configuration experiments had notably similar solutions that, despite competitive aggregate skill, had large errors in the diurnal convective cycle.
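One way to quantify the clustering noted above is to compute pairwise distances between the six runs' forecast fields and apply hierarchical clustering to see whether runs group by configuration or by driving model. The sketch below is a hypothetical illustration; the run names, random fields, and two-cluster cut are invented rather than taken from the study.

```python
# Hedged sketch: hierarchical clustering of forecast solutions.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

run_names = ["FV3-GFS", "FV3-UM", "UM-GFS", "UM-UM", "WRF-GFS", "WRF-UM"]
rng = np.random.default_rng(5)
fields = np.stack([rng.normal(size=100 * 100) for _ in run_names])

dist = pdist(fields, metric="euclidean")              # pairwise field distances
labels = fcluster(linkage(dist, method="average"),    # hierarchical clustering
                  t=2, criterion="maxclust")          # cut into two clusters
print(dict(zip(run_names, labels)))
```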
Abstract
During the 2005 NOAA Hazardous Weather Testbed Spring Experiment two different high-resolution configurations of the Weather Research and Forecasting-Advanced Research WRF (WRF-ARW) model were used to produce 30-h forecasts 5 days a week for a total of 7 weeks. These configurations used the same physical parameterizations and the same input dataset for the initial and boundary conditions, differing primarily in their spatial resolution. The first set of runs used 4-km horizontal grid spacing with 35 vertical levels while the second used 2-km grid spacing and 51 vertical levels.
Output from these daily forecasts is analyzed to assess the numerical forecast sensitivity to spatial resolution in the upper end of the convection-allowing range of grid spacing. The focus is on the central United States and the time period 18–30 h after model initialization. The analysis is based on a combination of visual comparison, systematic subjective verification conducted during the Spring Experiment, and objective metrics based largely on the mean diurnal cycle of the simulated reflectivity and precipitation fields. Additional insight is gained by examining the size distributions of the individual reflectivity and precipitation entities, and by comparing forecasts of mesocyclone occurrence in the two sets of forecasts.
In general, the 2-km forecasts provide more detailed presentations of convective activity, but there appears to be little, if any, forecast skill on the scales where the added details emerge. On the scales where both model configurations show higher levels of skill—the scale of mesoscale convective features—the numerical forecasts appear to provide comparable utility as guidance for severe weather forecasters. These results suggest that, for the geographical, phenomenological, and temporal parameters of this study, any added value provided by decreasing the grid increment from 4 to 2 km (with commensurate adjustments to the vertical resolution) may not be worth the considerable increases in computational expense.
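The mean-diurnal-cycle diagnostic mentioned above reduces, in essence, to averaging hourly fields across days and grid points. A minimal sketch with an invented array layout:

```python
# Hypothetical sketch: mean diurnal cycle of hourly precipitation.
import numpy as np

def mean_diurnal_cycle(precip: np.ndarray) -> np.ndarray:
    """precip shaped (n_days, n_hours, ny, nx) -> one mean value per hour."""
    return precip.mean(axis=(0, 2, 3))

hourly = np.random.default_rng(6).gamma(1.5, 1.0, size=(35, 24, 100, 100))
cycle = mean_diurnal_cycle(hourly)   # 24 domain- and day-mean values
```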
Abstract
The impacts of assimilating radar data and other mesoscale observations in real-time, convection-allowing model forecasts were evaluated during the spring seasons of 2008 and 2009 as part of the Hazardous Weather Testbed Spring Experiment activities. In tests of a prototype continental U.S.-scale forecast system, focusing primarily on regions with active deep convection at the initial time, assimilation of these observations had a positive impact. Daily interrogation of output by teams of modelers, forecasters, and verification experts provided additional insights into the value-added characteristics of the unique assimilation forecasts. This evaluation revealed that the positive effects of the assimilation were greatest during the first 3–6 h of each forecast, appeared to be most pronounced with larger convective systems, and may have been related to a phase lag that sometimes developed when the convective-scale information was not assimilated. These preliminary results are currently being evaluated further using advanced objective verification techniques.
Abstract
The Mesoscale Predictability Experiment (MPEX) was conducted from 15 May to 15 June 2013 in the central United States. MPEX was motivated by the basic question of whether experimental, subsynoptic observations can extend convective-scale predictability and otherwise enhance skill in short-term regional numerical weather prediction.
Observational tools for MPEX included the National Science Foundation (NSF)–National Center for Atmospheric Research (NCAR) Gulfstream V aircraft (GV), which featured the Airborne Vertical Atmospheric Profiling System mini-dropsonde system and a microwave temperature-profiling (MTP) system, as well as several ground-based mobile upsonde systems. Basic operations involved two missions per day: an early morning mission with the GV, well upstream of anticipated convective storms, and an afternoon and early evening mission with the mobile sounding units to sample the initiation and upscale feedbacks of the convection.
A total of 18 intensive observing periods (IOPs) were completed during the field phase, representing a wide spectrum of synoptic regimes and convective events, including several major severe weather and/or tornado outbreak days. The novel observational strategy employed during MPEX is documented herein, as is the unique role of the ensemble modeling efforts—which included an ensemble sensitivity analysis—to both guide the observational strategies and help address the potential impacts of such enhanced observations on short-term convective forecasting. Preliminary results of retrospective data assimilation experiments are discussed, as are data analyses showing upscale convective feedbacks.
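Ensemble sensitivity analysis, as commonly formulated (not necessarily MPEX's exact implementation), regresses a scalar forecast metric on initial-condition fields across members: the sensitivity of metric J to field x is estimated as cov(J, x)/var(x). The sketch below uses invented data throughout.

```python
# Hedged sketch of ensemble sensitivity analysis: dJ/dx ~ cov(J, x) / var(x).
import numpy as np

def ensemble_sensitivity(J: np.ndarray, x: np.ndarray) -> np.ndarray:
    """J: (n_members,) scalar metric; x: (n_members, ny, nx) IC field."""
    Jp = J - J.mean()
    xp = x - x.mean(axis=0)
    cov = (Jp[:, None, None] * xp).sum(axis=0) / (J.size - 1)
    var = xp.var(axis=0, ddof=1)
    return cov / np.where(var > 0, var, np.nan)

rng = np.random.default_rng(7)
J = rng.normal(size=30)                     # e.g., area-mean 24-h precipitation
x0 = rng.normal(size=(30, 50, 50))          # e.g., initial 500-hPa height field
sensitivity = ensemble_sensitivity(J, x0)
```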
Abstract
Since its initial release in 2000, the Weather Research and Forecasting (WRF) Model has become one of the world’s most widely used numerical weather prediction models. Designed to serve both research and operational needs, it has grown to offer a spectrum of options and capabilities for a wide range of applications. In addition, it underlies a number of tailored systems that address Earth system modeling beyond weather. While the WRF Model has a centralized support effort, it has become a truly community model, driven by the developments and contributions of an active worldwide user base. The WRF Model sees significant use for operational forecasting, and its research implementations are pushing the boundaries of finescale atmospheric simulation. Future model directions include developments in physics, exploiting emerging compute technologies, and ever-innovative applications. From its contributions to research, forecasting, educational, and commercial efforts worldwide, the WRF Model has made a significant mark on numerical weather prediction and atmospheric science.