Abstract
In July 2021, Typhoon In-Fa (TIF) triggered a significant indirect heavy precipitation event (HPE) in central China and a direct HPE in eastern China. Both events led to severe disasters. However, the synoptic-scale conditions and the impacts of these HPEs on future estimates of return periods remain poorly understood. Here, we find that the remote HPE that occurred ∼2200 km ahead of TIF over central China was a predecessor rain event (PRE). The PRE unfolded under the equatorward entrance of the upper-level westerly jet. This event, which encouraged divergent and diabatic outflow at upper levels, subsequently intensified the upper-level westerly jet. In contrast, the direct HPE in eastern China was due primarily to the long duration and slow movement of TIF. The direct HPE occurred in areas situated less than 200 km from TIF’s center and to the left of TIF’s propagation trajectory. Anomaly analyses reveal favorable thermodynamic and dynamic conditions and abundant atmospheric moisture that sustained TIF’s intensity. A saddle-shaped pressure field to the north of eastern China and weak peripheral steering flow impeded TIF’s northward movement. Hydrologically, the inclusion of these two HPEs in the historical record leads to a decrease in the estimated return periods of similar HPEs. Our findings highlight the potential difficulties that HPEs could introduce for the design of hydraulic engineering infrastructure as well as for the disaster mitigation measures required to alleviate future risk, particularly in central China.
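To illustrate the hydrological point above, the following minimal sketch (not the authors' method; the synthetic record, the event magnitudes, and the use of a GEV fit via SciPy are all assumptions) shows how appending two extreme events to an annual-maximum precipitation record can shorten the estimated return period of a comparable event.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical annual-maximum daily precipitation record (mm), 61 years.
annual_max = rng.gumbel(loc=120.0, scale=30.0, size=61)

def return_period_years(record, event_mm):
    """Return period of an event of size event_mm under a GEV fit to the record."""
    shape, loc, scale = stats.genextreme.fit(record)
    # Guard against a fitted bounded upper tail yielding exactly zero exceedance.
    exceedance = max(stats.genextreme.sf(event_mm, shape, loc=loc, scale=scale), 1.0e-9)
    return 1.0 / exceedance

event = 280.0  # hypothetical HPE daily total (mm)
before = return_period_years(annual_max, event)
after = return_period_years(np.append(annual_max, [280.0, 300.0]), event)
print(f"estimated return period: {before:.0f} yr without the HPEs, {after:.0f} yr with them")
```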
Abstract
Balloon-borne radiosondes are launched twice daily at coordinated times worldwide to assist with weather forecasting. Data collection from each flight is usually terminated when the balloon bursts at an altitude above 20 km. This paper highlights cases where the balloon’s turnaround occurs at lower altitudes and is associated with ice formation on the balloon, a weather condition of interest to aviation safety. Four examples of such cases are shown, where the balloon oscillates between 3- and 6-km altitude before rising to high altitudes and bursting. This oscillation is caused by the repeated accumulation and melting of ice on the balloon, which allows the pattern to repeat multiple times. An analysis of National Weather Service radiosonde data over a 5-yr period and a global dataset from the National Centers for Environmental Information from 1980 to 2020 identified that 0.18% of soundings worldwide satisfied these criteria. This indicates that weather conditions important to aviation safety are not rare in the worldwide database. We recommend that soundings that show descent at altitudes lower than typically expected continue to be tracked, particularly given that these up–down-oscillating soundings can provide valuable information for weather forecasting on days with significant precipitation and icing conditions that might lead to aviation safety concerns.
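As a rough illustration of how such flights might be flagged (the 500-m swing threshold, the 3-6-km band, and the two-cycle requirement below are assumptions, not the criteria used in the study), a sounding's altitude trace can be scanned for repeated descent-and-ascent cycles that begin in the mid-levels:

```python
import numpy as np

def is_oscillating_sounding(altitude_m, min_swing_m=500.0, band=(3000.0, 6000.0)):
    """Flag flights with at least two down-then-up cycles of >= min_swing_m that
    begin from a peak inside the given altitude band (meters)."""
    alt = np.asarray(altitude_m, dtype=float)
    cycles, going_down = 0, False
    ref_high = ref_low = alt[0]
    for z in alt[1:]:
        if not going_down:
            ref_high = max(ref_high, z)
            if ref_high - z >= min_swing_m and band[0] <= ref_high <= band[1]:
                going_down, ref_low = True, z
        else:
            ref_low = min(ref_low, z)
            if z - ref_low >= min_swing_m:
                cycles += 1                      # completed one down-then-up cycle
                going_down, ref_high = False, z
    return cycles >= 2

# Synthetic trace: bobs between ~3.5 and ~5.5 km, then rises toward burst near 25 km.
t = np.linspace(0.0, 1.0, 500)
trace = np.where(t < 0.4, 4500 + 1000 * np.sin(12 * np.pi * t),
                 5000 + (t - 0.4) * 33000)
print(is_oscillating_sounding(trace))   # True
```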
Abstract
Recent advances in hail trajectory modeling regularly produce datasets containing millions of hail trajectories. Because hail growth within a storm cannot be entirely separated from the structure of the trajectories producing it, a method to condense the multidimensionality of the trajectory information into a discrete number of features analyzable by humans is necessary. This article presents a three-dimensional trajectory clustering technique that is designed to group trajectories that have similar updraft-relative structures and orientations. The new technique is an application of a two-dimensional method common in the data mining field. Hail trajectories (or “parent” trajectories) are partitioned into segments before they are clustered using a modified version of the density-based spatial clustering of applications with noise (DBSCAN) method. Parent trajectories with segments that are members of at least two common clusters are then grouped into parent trajectory clusters before output. This multistep method has several advantages. Hail trajectories with structural similarities along only portions of their length, e.g., sourced from different locations around the updraft before converging to a common pathway, can still be grouped. However, the physical information inherent in the full length of the trajectory is retained, unlike methods that cluster trajectory segments alone. The conversion of trajectories to an updraft-relative space also allows trajectories separated in time to be clustered. Once the final output trajectory clusters are identified, a method for calculating a representative trajectory for each cluster is proposed. Cluster distributions of hailstone and environmental characteristics at each time step in the representative trajectory can also be calculated.
Significance Statement
To understand how a storm produces large hail, we need to understand the paths that hailstones take in a storm when growing. We can simulate these paths using computer models. However, the millions of hailstones in a simulated storm create millions of paths, which is hard to analyze. This article describes a machine learning method that groups together hailstone paths based on how similar their three-dimensional structures look. It will let hail scientists analyze hailstone pathways in storms more easily, and therefore better understand how hail growth happens.
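A simplified sketch of the segment-then-group idea described in the abstract above, using scikit-learn's standard DBSCAN rather than the modified variant in the paper; the segment features, eps, and min_samples choices here are illustrative assumptions:

```python
import numpy as np
from collections import defaultdict
from itertools import combinations
from sklearn.cluster import DBSCAN

def segment_features(traj, seg_len=5):
    """Split an (N, 3) updraft-relative trajectory into fixed-length segments,
    describing each by its start point and mean displacement vector (6 features)."""
    feats = [np.concatenate([traj[i], (traj[i + seg_len] - traj[i]) / seg_len])
             for i in range(0, len(traj) - seg_len, seg_len)]
    return np.array(feats).reshape(-1, 6)

def group_parent_trajectories(trajectories, eps=0.8, min_samples=5, seg_len=5):
    feats, owner = [], []
    for pid, traj in enumerate(trajectories):
        f = segment_features(np.asarray(traj, dtype=float), seg_len)
        feats.append(f)
        owner.extend([pid] * len(f))
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(np.vstack(feats))

    # Segment clusters to which each parent trajectory contributes (noise ignored).
    clusters_of = defaultdict(set)
    for pid, lab in zip(owner, labels):
        if lab != -1:
            clusters_of[pid].add(lab)

    # Link parents sharing at least two segment clusters; connected components of
    # this adjacency would form the final parent-trajectory clusters.
    links = defaultdict(set)
    for a, b in combinations(sorted(clusters_of), 2):
        if len(clusters_of[a] & clusters_of[b]) >= 2:
            links[a].add(b)
            links[b].add(a)
    return links
```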
Abstract
To simulate the large-scale impacts of wind farms, wind turbines are parameterized within mesoscale models in which grid sizes are typically much larger than turbine scales. Five wind-farm parameterizations were implemented in the Weather Research and Forecasting (WRF) Model v4.3.3 to simulate multiple operational wind farms in the North Sea, which were verified against a satellite image, airborne measurements, and the FINO-1 meteorological mast data on 14 October 2017. The parameterization by Volker et al. underestimated the turbulence and wind speed deficit compared to measurements and to the parameterization of Fitch et al., which is the default in WRF. The Abkar and Porté-Agel parameterization gave wind speed predictions close to those of Fitch et al., with a lower magnitude of predicted turbulence, although the parameterization was sensitive to a tunable constant. The parameterization by Pan and Archer resulted in turbine-induced thrust and turbulence that were slightly less than those of Fitch et al., but resulted in a substantial drop in power generation due to the magnification of wind speed differences in the power calculation. The parameterization by Redfern et al. was not substantially different from Fitch et al. in the absence of conditions such as strong wind veer. The simulations indicated the need for a turbine-induced turbulence source within a wind-farm parameterization for improved prediction of near-surface wind speed, near-surface temperature, and turbulence. The induced turbulence was responsible for enhancing turbulent momentum flux near the surface, causing a local speed-up of near-surface wind speed inside a wind farm. Our findings highlighted that wakes from large offshore wind farms could extend 100 km downwind, reducing downwind power production as in the case of the 400-MW Bard Offshore 1 wind farm, whose power output was reduced by the wakes of the 402-MW Veja Mate wind farm for this case study.
Significance Statement
Because wind farms are smaller than the common grid spacing of numerical weather prediction models, the impacts of wind farms on the weather have to be indirectly incorporated through parameterizations. Several approaches to parameterization are available and the most appropriate scheme is not always clear. The absence of a turbulence source in a parameterization leads to substantial inaccuracies in predicting near-surface wind speed and turbulence over a wind farm. The impact of large clusters of offshore wind turbines in the wind field can exceed 100 km downwind, resulting in a substantial loss of power for downwind turbines. The prediction of this power loss can be sensitive to the chosen parameterization, contributing to uncertainty in wind-farm economic planning.
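One clarifying note on the power-calculation sensitivity mentioned in the abstract above: because turbine power scales roughly with the cube of hub-height wind speed below rated power, modest wind-speed differences between schemes magnify into much larger power differences. A minimal sketch with an idealized power curve (the rotor size, efficiency, rated power, and wind speeds below are assumptions, not values from the study):

```python
import numpy as np

def ideal_power_mw(v, rho=1.225, rotor_d=120.0, cp=0.45, rated_mw=5.0):
    """Idealized power curve: P = 0.5 * rho * A * Cp * v**3, capped at rated power."""
    area = np.pi * (rotor_d / 2.0) ** 2
    p_mw = 0.5 * rho * area * cp * np.asarray(v, dtype=float) ** 3 / 1.0e6
    return np.minimum(p_mw, rated_mw)

v_a, v_b = 8.0, 7.4   # hypothetical hub-height wind speeds from two schemes (m/s)
p_a, p_b = ideal_power_mw(v_a), ideal_power_mw(v_b)
print(f"wind speed differs by {100 * (v_a - v_b) / v_a:.0f}%, "
      f"power differs by {100 * (p_a - p_b) / p_a:.0f}%")
```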
Abstract
Methods to improve the representation of hail in the Thompson–Eidhammer microphysics scheme are explored. A new two-moment and predicted density graupel category is implemented into the Thompson–Eidhammer scheme. Additionally, the one-moment graupel category’s intercept parameter is modified, based on hail observations, to shift the properties of the graupel category to become more hail-like since the category is designed to represent both graupel and hail. Finally, methods to diagnose maximum expected hail size at the surface and aloft are implemented. The original Thompson–Eidhammer version, the newly implemented two-moment and predicted density graupel version, and the modified (to be more hail-like) one-moment version are evaluated using a case that occurred during the Plains Elevated Convection at Night (PECAN) field campaign, during which hail-producing storms merged into a strong mesoscale convective system. The three versions of the scheme are evaluated for their ability to predict hail sizes compared to observed hail sizes from storm reports and estimated from radar, their ability to predict radar reflectivity signatures at various altitudes, and their ability to predict cold-pool features like temperature and wind speed. One key benefit of using the two-moment and predicted density graupel category is that the simulated reflectivity values in the upper levels of discrete storms are clearly improved. This improvement coincides with a significant reduction in the areal extent of graupel aloft, also seen when using the updated one-moment scheme. The two-moment and predicted density graupel scheme is also better able to predict a wide variety of hail sizes at the surface, including large (>2-in. diameter) hail that was observed during this case.
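For context on the hail-size diagnostic mentioned above, one common approach (not necessarily the exact method implemented in the scheme; the distribution parameters and the 1e-4 m^-3 threshold below are illustrative) defines the maximum expected size as the diameter beyond which the inverse-exponential size distribution predicts fewer stones than a small threshold concentration:

```python
import numpy as np

def max_expected_hail_mm(n0, lam, n_thresh=1.0e-4):
    """For N(D) = n0 * exp(-lam * D) (n0 in m^-4, lam in m^-1), return the diameter
    (mm) above which fewer than n_thresh stones per m^3 are expected:
    (n0 / lam) * exp(-lam * D) = n_thresh  =>  D = ln(n0 / (lam * n_thresh)) / lam."""
    d_max_m = np.log(n0 / (lam * n_thresh)) / lam
    return 1000.0 * max(d_max_m, 0.0)

# Hypothetical intercept and slope parameters for a hail-like graupel distribution.
print(f"{max_expected_hail_mm(n0=8.0e5, lam=400.0):.0f} mm")
```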
Abstract
For data assimilation to provide faithful state estimates for dynamical models, specifications of observation uncertainty need to be as accurate as possible. Innovation-based methods, such as the Desroziers diagnostics, are commonly used to estimate observation uncertainty, but such methods can depend greatly on the prescribed background uncertainty. For ensemble data assimilation, this uncertainty comes from statistics calculated from ensemble forecasts, which require inflation and localization to address undersampling. In this work, we use an ensemble Kalman filter (EnKF) with a low-dimensional Lorenz model to investigate the interplay between the Desroziers method and inflation. Two inflation techniques are used for this purpose: 1) a rigorously tuned fixed multiplicative scheme and 2) an adaptive state-space scheme. We document how inaccuracies in observation uncertainty affect errors in EnKF posteriors and study the combined impacts of misspecified initial observation uncertainty, sampling error, and model error on Desroziers estimates. We find that whether observation uncertainty is over- or underestimated greatly affects the stability of data assimilation and the accuracy of Desroziers estimates and that preference should be given to initial overestimates. Inline Desroziers estimates tend to remove the dependence between ensemble spread–skill and the initially prescribed observation error. In addition, we find that the inclusion of model error introduces spurious correlations in observation uncertainty estimates. Further, we note that the adaptive inflation scheme is less robust than fixed inflation at mitigating multiple sources of error. Last, sampling error strongly exacerbates existing sources of error and greatly degrades EnKF estimates, which translates into biased Desroziers estimates of observation error covariance.
Significance Statement
To generate accurate predictions of various components of the Earth system, numerical models require an accurate specification of state variables at the current time. This step weighs a probabilistic estimate of the current state against information provided by environmental measurements of the true state. Various strategies exist for estimating uncertainty in observations within this framework, but they are sensitive to a host of assumptions, which are investigated in this study.
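For reference, the Desroziers diagnostic discussed above estimates the observation-error covariance from paired background innovations and analysis residuals, R ≈ E[d_a (d_b)^T] with d_b = y − H(x_b) and d_a = y − H(x_a), where the expectation is taken over many assimilation cycles. A minimal sketch (the synthetic arrays below are placeholders, not output from the EnKF experiments):

```python
import numpy as np

def desroziers_r_estimate(d_background, d_analysis):
    """Estimate R from background innovations and analysis residuals, each stacked
    as (n_cycles, n_obs) arrays: R ~ mean over cycles of d_analysis d_background^T."""
    db = np.asarray(d_background, dtype=float)
    da = np.asarray(d_analysis, dtype=float)
    return da.T @ db / db.shape[0]          # (n_obs, n_obs) estimate of R

# Synthetic example: 1000 assimilation cycles, 3 observations per cycle.
rng = np.random.default_rng(1)
db = rng.normal(size=(1000, 3))
da = 0.5 * db + 0.1 * rng.normal(size=(1000, 3))
print(np.round(desroziers_r_estimate(db, da), 2))
```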
Abstract
Many studies have aimed to identify novel storm characteristics that are indicative of current or future severe weather potential using a combination of ground-based radar observations and severe reports. However, because this process is time-intensive, such work is often done on a small scale, using limited case studies on the order of tens to hundreds of storms. Herein, we introduce the GridRad-Severe dataset, a database including ∼100 severe weather days per year and upward of 1.3 million objectively tracked storms from 2010 to 2019. Composite radar volumes spanning objectively determined, report-centered domains are created for each selected day using the GridRad compositing technique, with dates objectively determined using report thresholds defined to capture the highest-end severe weather days from each year, evenly distributed across all severe report types (tornadoes, severe hail, and severe wind). Spatiotemporal domain bounds for each event are objectively determined to encompass both the majority of reports and the time of convection initiation. Severe weather reports are matched to storms that are objectively tracked using the radar data, so the evolution of the storm cells and their severe weather production can be evaluated. We then apply storm mode (single-cell, multicell, or mesoscale convective system storms) and right-moving supercell classification techniques to the dataset, and revisit various questions about severe storms and their bulk characteristics posed and evaluated in past work. Additional applications of this dataset are reviewed for possible future studies.
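As an illustration of the report-to-storm matching step described above (the 20-km and 10-min windows here are placeholders, not the GridRad-Severe criteria), each severe report can be assigned to the nearest tracked storm centroid within a space-time window:

```python
import numpy as np

def match_reports_to_storms(reports_xyt, storms_xyt, storm_ids,
                            max_dist_km=20.0, max_dt_min=10.0):
    """reports_xyt: (n_reports, 3) rows of [x_km, y_km, t_min];
    storms_xyt: (n_storm_times, 3) tracked centroid positions and valid times;
    storm_ids: (n_storm_times,) track ID for each centroid row.
    Returns the matched track ID (or None) for every report."""
    storms_xyt = np.asarray(storms_xyt, dtype=float)
    storm_ids = np.asarray(storm_ids)
    matches = []
    for rx, ry, rt in np.asarray(reports_xyt, dtype=float):
        dist = np.hypot(storms_xyt[:, 0] - rx, storms_xyt[:, 1] - ry)
        ok = (np.abs(storms_xyt[:, 2] - rt) <= max_dt_min) & (dist <= max_dist_km)
        matches.append(storm_ids[np.argmin(np.where(ok, dist, np.inf))]
                       if ok.any() else None)
    return matches
```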
Abstract
Probabilistic forecasting is a common activity in many fields of the Earth sciences. Assessing the quality of probabilistic forecasts—probabilistic forecast verification—is therefore an essential task in these activities. Numerous methods and metrics have been proposed for this purpose; however, the probabilistic verification of vector variables of ensemble forecasts has received comparatively little attention. Here we introduce a new approach that is applicable to verifying ensemble forecasts of continuous scalar and two-dimensional vector data. The proposed method uses a fixed-radius near-neighbors search to compute two information-based scores, the ignorance score (the logarithmic score) and the information gain, which quantifies the skill gain from the reference forecast. Basic characteristics of the proposed scores were examined using idealized Monte Carlo simulations. The results indicated that, for relatively small ensemble sizes (<25), neither the continuous ranked probability score (CRPS) nor the proposed score is proper in terms of forecast dispersion. The proposed verification method was successfully used to verify the Madden–Julian oscillation index, which is a two-dimensional quantity. The proposed method is expected to advance probabilistic ensemble forecasts in various fields.
Significance Statement
In the Earth sciences, stochastic future states are estimated by solving a large number of forecasts (called ensemble forecasts) based on physical equations with slightly different initial conditions and stochastic parameters. The verification of probabilistic forecasts is an essential part of forecasting and modeling activity in the Earth sciences. However, there is no information-based probabilistic verification score applicable for vector variables of ensemble forecasts. The purpose of this study is to introduce a novel method for verifying scalar and two-dimensional vector variables of ensemble forecasts. The proposed method offers a new approach to probabilistic verification and is expected to advance probabilistic ensemble forecasts in various fields.
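A simplified illustration of the fixed-radius idea described above (not the authors' exact estimator; the radius, ensemble sizes, and the climatology-like reference ensemble below are assumptions): approximate the forecast density at a verifying two-dimensional observation by counting ensemble members within a fixed radius, then form the ignorance score and the information gain relative to the reference.

```python
import numpy as np

def ignorance_fixed_radius(ensemble_xy, obs_xy, radius):
    """-log2 of the forecast density at obs_xy, estimated from the fraction of
    (M, 2) ensemble members falling inside a disk of the given radius."""
    ens = np.asarray(ensemble_xy, dtype=float)
    dist = np.hypot(*(ens - np.asarray(obs_xy, dtype=float)).T)
    count = max(int(np.sum(dist <= radius)), 1)     # guard against log(0)
    density = count / (len(ens) * np.pi * radius ** 2)
    return -np.log2(density)

rng = np.random.default_rng(2)
forecast = rng.normal([1.0, 0.5], 0.4, size=(50, 2))   # hypothetical 2D (e.g., MJO-like) ensemble
reference = rng.normal([0.0, 0.0], 1.0, size=(50, 2))  # reference ensemble
obs = np.array([0.9, 0.6])
ign_f = ignorance_fixed_radius(forecast, obs, radius=0.3)
ign_r = ignorance_fixed_radius(reference, obs, radius=0.3)
print(f"ignorance = {ign_f:.2f} bits, information gain = {ign_r - ign_f:.2f} bits")
```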
Abstract
NOAA has been developing a fully coupled Earth system model under the Unified Forecast System framework that will be responsible for global (ensemble) predictions at lead times of 0–35 days. The development has involved several prototype runs consisting of bimonthly initializations over a 7-yr period for a total of 168 cases. This study leverages these existing (baseline) prototypes to isolate the impact of substituting (one-at-a-time) parameterizations for convection, microphysics, and the planetary boundary layer on 35-day forecasts. Through these physics sensitivity experiments, it is found that no particular configuration of the subseasonal-length coupled model is uniformly better or worse, based on several metrics including mean-state biases and skill scores for the Madden–Julian oscillation, precipitation, and 2-m temperature. Importantly, the spatial patterns of many “first-order” biases (e.g., impact of convection on precipitation) are remarkably similar between the end of the first week and weeks 3–4, indicating that some subseasonal biases may be mitigated through tuning at shorter time scales. This result, while shown for the first time in the context of subseasonal prediction with different physics schemes, is consistent with findings in climate models that some mean-state biases evident in multiyear averages can manifest in only a few days. An additional convective parameterization test using a different baseline shows that attempting to generalize results between or within modeling systems may be misguided. The limitations of generalizing results when testing physics schemes are most acute in modeling systems that undergo rapid, intense development from myriad contributors—as is the case in (quasi) operational environments.
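A minimal sketch of the kind of comparison implied by the bias-pattern result above (the centered formulation and cosine-latitude weighting are conventional choices, not necessarily those used in the study): compute the pattern correlation between a week-1 bias map and a weeks-3-4 bias map.

```python
import numpy as np

def pattern_correlation(bias_a, bias_b, lat_deg=None):
    """Centered, optionally cos(latitude)-weighted pattern correlation of two bias
    maps; lat_deg, if given, is the latitude at each grid point (same shape)."""
    a = np.asarray(bias_a, dtype=float).ravel()
    b = np.asarray(bias_b, dtype=float).ravel()
    w = (np.cos(np.deg2rad(np.asarray(lat_deg, dtype=float))).ravel()
         if lat_deg is not None else np.ones_like(a))
    a = a - np.average(a, weights=w)
    b = b - np.average(b, weights=w)
    return (np.average(a * b, weights=w)
            / np.sqrt(np.average(a * a, weights=w) * np.average(b * b, weights=w)))

# e.g., pattern_correlation(week1_bias, weeks34_bias, lat_deg=lat2d)  # hypothetical arrays
```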
Abstract
Ensemble sensitivity analysis (ESA) is a numerical method by which the potential value of a single additional observation can be estimated using an ensemble numerical weather forecast. By performing ESA observation targeting on runs of the Texas Tech University WRF Ensemble from the spring of 2016, we obtained a dataset of predicted variance reductions (hereinafter referred to as target values) over approximately 6 weeks of convective forecasts for the central United States. It was then ascertained from these cases that the geographic variation in target values is large for any one case, with local maxima often several standard deviations higher than the mean and surrounded by sharp gradients. Radiosondes launched from the surface, then, would need to take this variation into account to properly sample a specific target by launching upstream of where the target is located rather than directly underneath it. In many cases, the difference between the maximum target value in the vertical and the actual target value observed along the balloon path was multiple standard deviations. This may help to explain the lower-than-expected forecast improvements observed in previous ESA targeting experiments, especially the Mesoscale Predictability Experiment (MPEX). If target values are a good predictor of observation value, then it is possible that taking the balloon path into account in targeting systems for radiosonde deployment may substantially improve the value added to the forecast by individual observations.
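For reference, the ensemble-sensitivity targeting calculation as commonly formulated in the ESA observation-targeting literature (not necessarily the exact implementation used with the Texas Tech ensemble) predicts the reduction in forecast-metric variance from assimilating a single observation at point i as cov(J, x_i)^2 / (var(x_i) + sigma_o^2). A minimal sketch over a set of candidate observation locations, with synthetic inputs:

```python
import numpy as np

def target_values(forecast_metric, candidate_obs, obs_error_var):
    """forecast_metric: (n_members,) ensemble values of the response function J;
    candidate_obs: (n_members, n_points) ensemble values of the observed variable;
    returns the predicted variance reduction ('target value') at every point."""
    j = forecast_metric - forecast_metric.mean()
    x = candidate_obs - candidate_obs.mean(axis=0)
    n = len(j) - 1
    cov_jx = j @ x / n                       # cov(J, x_i) for each candidate point i
    var_x = np.sum(x * x, axis=0) / n
    return cov_jx ** 2 / (var_x + obs_error_var)

# Synthetic example: 50 members, 1000 candidate locations; the metric is tied to point 10.
rng = np.random.default_rng(3)
x = rng.normal(size=(50, 1000))
j = 2.0 * x[:, 10] + rng.normal(size=50)
print(int(np.argmax(target_values(j, x, obs_error_var=1.0))))   # typically 10
```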