Abstract
In this paper, we present a new and more stable numerical implementation of the two-energy configuration of the Third Order Moments Unified Condensation and N-dependent Solver (TOUCANS) turbulence scheme. The original time-stepping scheme in TOUCANS tends to suffer from spurious oscillations in stably stratified turbulent flows. Because of their high frequency, the oscillations resemble the so-called fibrillations that are caused by the coupling between turbulent exchange coefficients and the stability parameter. However, our analysis and simulations show that the oscillations in the two-energy scheme are caused by the use of a specific implicit–explicit temporal discretization for the relaxation terms. In TOUCANS, the relaxation technique is applied to the source and dissipation terms in the prognostic turbulence energy equations to ensure numerical stability for relatively long time steps. We present both a detailed linear stability analysis and a bifurcation analysis, which indicate that the temporal discretization becomes oscillatory for time steps exceeding a critical length. Based on these findings, we propose a new, affordable time discretization of the involved terms that makes the scheme more implicit. This ensures stable solutions with sufficient accuracy for a wider range of time-step lengths. We confirm the analytical findings in both idealized 1D and full 3D model experiments.
Significance Statement
The vertical turbulent transport of momentum, heat, and moisture has to be parameterized in numerical weather prediction models. The parameterization typically employs nonlinear damping equations, whose numerical integration can lead to unphysical, time-oscillating solutions. In general, the presence of such numerical noise negatively affects model performance. In our work, we address numerical issues of a recently developed scheme with two prognostic turbulence energies, which offers more realism and physical complexity. Specifically, we detect, explain, and design a numerical treatment for a new type of spurious oscillation that is connected to the temporal discretization. The treatment suppresses the oscillations and allows us to increase the model time step more than fourfold while keeping an essentially non-oscillatory solution.
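The oscillatory behaviour beyond a critical time step, and its cure by a more implicit discretization, can be illustrated on a minimal toy relaxation equation (a sketch only, with illustrative values; the actual TOUCANS equations and discretization are more involved):

```python
# Toy relaxation equation dE/dt = (E_eq - E) / tau, a stand-in for the
# relaxed source/dissipation terms discussed above (not the real scheme).
tau = 100.0   # relaxation time scale [s] (illustrative)
E_eq = 1.0    # equilibrium turbulence energy (illustrative)
E0 = 2.0      # initial condition
dt = 250.0    # time step exceeding the critical length (here dt > 2 * tau)

def step_explicit(E, dt):
    # Forward Euler: the amplification factor (1 - dt/tau) turns negative
    # for dt > tau, so the solution flips sign around E_eq every step.
    return E + dt * (E_eq - E) / tau

def step_implicit(E, dt):
    # Backward Euler: the amplification factor 1/(1 + dt/tau) stays in
    # (0, 1), giving a monotone, oscillation-free decay for any dt.
    return (E + dt * E_eq / tau) / (1.0 + dt / tau)

E_exp, E_imp = E0, E0
signs = []
for _ in range(10):
    E_exp = step_explicit(E_exp, dt)
    E_imp = step_implicit(E_imp, dt)
    signs.append(E_exp - E_eq > 0)

# The explicit solution alternates sides of E_eq (and grows for dt > 2*tau),
# while the implicit one approaches E_eq monotonically.
```

The same qualitative picture, a critical time-step length separating monotone from oscillatory (and eventually unstable) behaviour, is what the linear stability and bifurcation analyses quantify for the full scheme.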
Abstract
Cold pools are mesoscale features that are key for understanding the organization of convection but are insufficiently captured in conventional observations. This study conducts a statistical characterization of cold-pool passages observed at a 280-m-high boundary layer mast in Hamburg (Germany) and discusses the factors controlling their signal strength. During 14 summer seasons, 489 cold-pool events are identified from rapid temperature drops below −2 K associated with rainfall. The cold-pool activity exhibits distinct annual and diurnal cycles, peaking in July and midafternoon, respectively. The median temperature perturbation is −3.3 K at 2-m height and weakens above. The increases in hydrostatic air pressure and specific humidity are likewise largest near the surface. Extrapolation of the vertically weakening pressure signal suggests a characteristic cold-pool depth of about 750 m. Disturbances in the horizontal and vertical wind speed components document a lifting-induced circulation of air masses ahead of the approaching cold-pool front. According to a correlation analysis, the near-surface temperature perturbation is more strongly controlled by the pre-event saturation deficit (r = −0.71) than by the event-accumulated rainfall amount (r = −0.35). Simulating the observed temperature drops as idealized wet-bulb processes suggests that evaporative cooling alone explains 64% of the variability in cold-pool strength. This number increases to 92% for cases that are not affected by advection of midtropospheric low-Θe air masses under convective downdrafts.
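The idealized wet-bulb process, cooling of the pre-event air toward its wet-bulb temperature by evaporation, can be sketched with a standard psychrometric calculation (illustrative constants and a simple bisection solver; this is not the authors' exact procedure):

```python
import math

def esat_hpa(T):
    # Saturation vapour pressure over water [hPa], Magnus-type approximation.
    return 6.112 * math.exp(17.62 * T / (243.12 + T))

def wet_bulb(T, RH, p=1013.25):
    # Solve the psychrometric equation e = esat(Tw) - gamma * (T - Tw)
    # for the wet-bulb temperature Tw by bisection; gamma ~ 0.65 hPa/K is
    # an approximate psychrometric coefficient at surface pressure.
    e = esat_hpa(T) * RH / 100.0
    gamma = 0.65 * p / 1013.25
    lo, hi = -40.0, T
    for _ in range(60):
        Tw = 0.5 * (lo + hi)
        if esat_hpa(Tw) - gamma * (T - Tw) > e:
            hi = Tw  # guess too warm
        else:
            lo = Tw  # guess too cold
    return Tw

# A drier pre-event boundary layer (larger saturation deficit) permits a
# larger evaporative temperature drop T - Tw, consistent with the reported
# stronger correlation with the saturation deficit (r = -0.71).
drop_dry = 30.0 - wet_bulb(30.0, 40.0)    # warm, dry afternoon (hypothetical)
drop_moist = 30.0 - wet_bulb(30.0, 90.0)  # near-saturated case (hypothetical)
```

Under these assumptions the dry case allows a temperature drop several times larger than the near-saturated one, mirroring the control of cold-pool strength by the pre-event saturation deficit.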
Abstract
The local impact of stochastic shallow convection on clouds and precipitation is tested in a case study over the tropical Atlantic on 20 December 2013 using the Icosahedral Nonhydrostatic Model (ICON). ICON is used at a grid resolution of 2.5 km and is tested in several configurations that differ in their treatment of shallow convection. A stochastic shallow convection scheme is compared to the operational deterministic scheme and to a case with no representation of shallow convection. The model is evaluated by comparing synthetically generated irradiance data for both visible and infrared wavelengths against actual satellite observations. The experimental approach is designed to distinguish the local effects of parameterized shallow convection (or lack thereof) within the trades versus the ITCZ. The stochastic cases prove to be superior in reproducing low-level cloud cover, deep convection and its organization, as well as the distribution of precipitation in the tropical Atlantic ITCZ. In these cases, convective heating in the subcloud layer is substantial, and the boundary layer deepens as a result of the heating, while evaporation is enhanced at the expense of sensible heat flux at the ocean's surface. The stochastic case where subgrid shallow convection is deactivated below the resolved deep updrafts indicates that local boundary layer convection is crucial for a better representation of deep convection. Based on these results, our study points to the necessity of further developing parameterizations of shallow convection for use at convection-permitting resolutions and of including them in weather and climate models even in their current, imperfect form.
Abstract
Cloud-affected radiances from geostationary satellite sensors provide the first area-wide observable signal of convection with high spatial resolution in the range of kilometers and high temporal resolution in the range of minutes. However, these observations are not yet assimilated in operational convection-resolving weather prediction models, as the rapid, nonlinear evolution of clouds makes the assimilation of related observations very challenging. To address these challenges, we investigate the assimilation of satellite radiances from visible and infrared channels in idealized observing system simulation experiments (OSSEs) for a day with summertime deep convection in central Europe. This constitutes the first study assimilating a combination of all-sky observations from infrared and visible satellite channels, and the experiments provide the opportunity to test various assimilation settings in an environment where the observation forward operator and the numerical model exhibit no systematic errors. The experiments provide insights into appropriate settings for the assimilation of cloud-affected satellite radiances in an ensemble data assimilation system and demonstrate the potential of these observations for convective-scale weather prediction. Both infrared and visible radiances individually lead to an overall forecast improvement, but the best results are achieved with a combination of both observation types, which provide complementary information on atmospheric clouds. This combination strongly improves the forecast of precipitation and other quantities throughout the entire 8-h lead time.
Abstract
We investigate the feasibility of addressing model error by perturbing and estimating uncertain static model parameters using the localized ensemble transform Kalman filter. In particular, we use the augmented state approach, where parameters are updated by observations via their correlation with observed state variables. This online approach offers a flexible, yet consistent way to better fit model variables affected by the chosen parameters to observations, while ensuring feasible model states. We show in a nearly operational convection-permitting configuration that the prediction of clouds and precipitation with the COSMO-DE model is improved if the two-dimensional roughness length parameter is estimated with the augmented state approach. Here, the targeted model error is the roughness length itself and the surface fluxes, which influence the initiation of convection. At analysis time, Gaussian noise with a specified correlation matrix is added to the roughness length to regulate the parameter spread. In the northern part of the COSMO-DE domain, where the terrain is mostly flat and assimilated surface wind measurements are dense, estimating the roughness length led to improved forecasts of clouds and precipitation for lead times of up to 6 h. In the southern part of the domain, the parameter estimation was detrimental unless the correlation length scale of the Gaussian noise added to the roughness length was increased. The impact of the parameter estimation was found to be larger when synoptic forcing is weak and the model output is more sensitive to the roughness length.
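The augmented state approach can be sketched in a minimal scalar setting: a parameter is appended to the state vector and updated purely through its ensemble covariance with an observed state variable. The toy below uses a stochastic EnKF with perturbed observations rather than the LETKF, and all values are illustrative, not the COSMO-KENDA setup:

```python
import random
random.seed(1)

N = 200                     # ensemble size (illustrative)
b_true, forcing = 2.0, 1.5  # hypothetical parameter and fixed forcing
obs_err = 0.1               # observation error standard deviation

ens_b = [b_true + random.gauss(0, 0.5) for _ in range(N)]  # parameter prior
for _ in range(20):
    # Forecast: the observed state x depends on the uncertain parameter b.
    ens_x = [b * forcing + random.gauss(0, 0.05) for b in ens_b]
    y_obs = b_true * forcing + random.gauss(0, obs_err)
    # Sample statistics of the augmented state [x, b].
    mx, mb = sum(ens_x) / N, sum(ens_b) / N
    var_x = sum((x - mx) ** 2 for x in ens_x) / (N - 1)
    cov_bx = sum((b - mb) * (x - mx) for b, x in zip(ens_b, ens_x)) / (N - 1)
    # Kalman gain for the parameter via its covariance with x.
    K_b = cov_bx / (var_x + obs_err ** 2)
    ens_b = [b + K_b * (y_obs + random.gauss(0, obs_err) - x)
             for b, x in zip(ens_b, ens_x)]
    # Additive noise at analysis time keeps the parameter spread alive,
    # mimicking the regulation of parameter spread described above.
    ens_b = [b + random.gauss(0, 0.02) for b in ens_b]

b_est = sum(ens_b) / N  # converges toward b_true over the cycles
```

The design choice visible here is the one the abstract relies on: the parameter is never observed directly; it is corrected only insofar as it correlates with observed model variables, which keeps the updated states dynamically feasible.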
Abstract
State-of-the-art ensemble prediction systems usually provide ensembles with only 20–250 members for estimating the uncertainty of the forecast and its spatial and spatiotemporal covariance. Given that the degrees of freedom of atmospheric models are several orders of magnitude higher, the estimates are substantially affected by sampling errors. For error covariances, spurious correlations lead to random sampling errors, but also to a systematic overestimation of the correlation. A common approach to mitigate the impact of sampling errors for data assimilation is to localize correlations. However, this is a challenging task given that physical correlations in the atmosphere can extend over long distances. Besides data assimilation, sampling errors pose an issue for the investigation of spatiotemporal correlations using ensemble sensitivity analysis. Our study evaluates a statistical approach for correcting sampling errors. The applied sampling error correction is a lookup table–based approach and is therefore computationally very efficient. We show that this approach substantially improves both the estimates of spatial correlations for data assimilation and those of spatiotemporal correlations for ensemble sensitivity analysis. The evaluation is performed using the first convective-scale 1000-member ensemble simulation for central Europe. Correlations of the 1000-member ensemble forecast serve as truth to assess the performance of the sampling error correction for smaller subsets of the full ensemble. The sampling error correction strongly reduced both random and systematic errors for all evaluated variables, ensemble sizes, and lead times.
Abstract
Dropsonde observations from three research aircraft in the North Atlantic region, as well as several hundred additionally launched radiosondes over Canada and Europe, were collected during the international North Atlantic Waveguide and Downstream Impact Experiment (NAWDEX) in autumn 2016. In addition, over 1000 dropsondes were deployed during NOAA’s Sensing Hazards with Operational Unmanned Technology (SHOUT) and Reconnaissance missions in the west Atlantic basin, supplementing the conventional observing network for several intensive observation periods. This unique dataset was assimilated within the framework of cycled data denial experiments for a 1-month period performed with the global model of the ECMWF. Results show a slightly reduced mean forecast error (1%–3%) over the northern Atlantic and Europe by assimilating these additional observations, with the most prominent error reductions being linked to Tropical Storm Karl, Cyclones Matthew and Nicole, and their subsequent interaction with the midlatitude waveguide. The evaluation of Forecast Sensitivity to Observation Impact (FSOI) indicates that the largest impact is due to dropsondes near tropical storms and cyclones, followed by dropsondes over the northern Atlantic and additional Canadian radiosondes. Additional radiosondes over Europe showed a comparatively small beneficial impact.
Abstract
We investigate the practical predictability limits of deep convection in a state-of-the-art, high-resolution, limited-area ensemble prediction system. Two sophisticated predictability measures, namely the believable scale and the decorrelation scale, are applied to determine the predictable scales of short-term forecasts in a hierarchy of model configurations. First, we consider an idealized perfect-model setup that includes both small-scale and synoptic-scale perturbations. We find increased predictability in the presence of orography and a strongly beneficial impact of radar data assimilation, which extends the forecast horizon by up to 6 h. Second, we examine realistic COSMO-KENDA simulations, including assimilation of radar and conventional data and a representation of model errors, for a convectively active two-week summer period over Germany. The results confirm increased predictability in orographic regions. We find that both latent heat nudging and ensemble Kalman filter assimilation of radar data lead to increased forecast skill, but the impact is smaller than in the idealized experiments. This highlights the need to assimilate spatially and temporally dense data, but also indicates room for further improvement. Finally, the examination of operational COSMO-DE-EPS ensemble forecasts for three summer periods confirms the beneficial impact of orography in a statistical sense and also reveals increased predictability in weather regimes controlled by synoptic forcing, as defined by the convective adjustment time scale.
Abstract
Deep moist convection is an inherently multiscale phenomenon, with organization processes coupling convective elements to larger-scale structures. A realistic representation of tropical dynamics demands a simulation framework that is capable of representing physical processes across a wide range of scales. Therefore, storm-resolving numerical simulations at 2.4-km grid spacing have been performed for 2 months, covering the tropical Atlantic and neighboring regions. The simulated cloud fields are combined with infrared geostationary satellite observations, and their realism is assessed with the help of object-based evaluation methods. It is shown that the simulations are able to develop a well-defined intertropical convergence zone. However, marine convective activity, measured by the cold cloud coverage, is considerably underestimated, especially for the winter season and the western Atlantic. The spatial coupling across the resolved scales leads to simulated cloud number size distributions that follow power laws similar to the observations, with slopes steeper in winter than in summer and steeper over ocean than over land. The simulated slopes are, however, too steep, indicating too many small and too few large tropical cloud cells. It is further found that the number of larger cells is less influenced by the multiday variability of environmental conditions. Despite the identified deficits, the analyzed simulations highlight the great potential of this modeling framework for process-based studies of tropical deep convection.
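A power-law slope of a cloud number size distribution, n(s) ~ s^(-b), is commonly estimated by linear regression in log-log space over size bins. The sketch below does this on synthetic Pareto-distributed "cloud sizes" (a hypothetical illustration; the paper's object-based estimator may differ in binning and fitting details):

```python
import math
import random

random.seed(42)

# Synthetic cloud sizes with density ~ s**(-b_true) for s >= 1, drawn by
# inverse-CDF sampling of a Pareto distribution (tail exponent b_true - 1).
b_true = 1.8
sizes = [(1.0 - random.random()) ** (-1.0 / (b_true - 1.0))
         for _ in range(50000)]

# Logarithmically spaced size bins (octaves), as is common for size spectra.
edges = [2 ** k for k in range(12)]
xs, ys = [], []
for lo, hi in zip(edges[:-1], edges[1:]):
    count = sum(lo <= s < hi for s in sizes)
    if count:
        xs.append(math.log(math.sqrt(lo * hi)))   # geometric bin centre
        ys.append(math.log(count / (hi - lo)))    # number density estimate

# Least-squares slope in log-log space; should recover roughly -b_true.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
den = sum((x - mx) ** 2 for x in xs)
slope = num / den
```

In this framing, a steeper (more negative) fitted slope corresponds to relatively more small and fewer large cells, which is how the abstract's comparison between simulated and observed distributions reads.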
Abstract
Aircraft observations of wind and temperature collected by airport surveillance radars [Mode-S Enhanced Surveillance (Mode-S EHS)] were assimilated in the Consortium for Small-Scale Modeling Kilometre-scale Ensemble Data Assimilation (COSMO-KENDA) system, which couples an ensemble Kalman filter to a 40-member ensemble of the convection-permitting COSMO-DE model. The number of observing aircraft in Mode-S EHS was about 15 times larger than in the AMDAR system. In the comparison of the two aircraft observation systems, a similar observation error standard deviation was diagnosed for wind, whereas a larger error was diagnosed for Mode-S EHS temperature. With the high density of Mode-S EHS observations, a reduction of the temperature and wind errors in 1- and 3-h forecasts was found mainly at flight levels and less so near the surface. The amount of Mode-S EHS data was reduced by random thinning to test the effect of a varying observation density. With the current data assimilation setup, the forecast error reduction saturated when more than 50% of the Mode-S EHS data were assimilated. Forecast kinetic energy spectra indicated that the error reduction is related to analysis updates on all scales resolved by COSMO-DE.
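The random thinning used to vary the observation density can be sketched as a small helper that retains a given fraction of reports (an illustrative function, not the operational thinning code):

```python
import random

def thin(observations, fraction, seed=0):
    # Randomly retain `fraction` of the observations without replacement,
    # as in experiments that assimilate e.g. 50% of Mode-S EHS reports.
    rng = random.Random(seed)  # seeded for reproducible denial experiments
    k = round(len(observations) * fraction)
    return rng.sample(observations, k)

obs = list(range(1000))   # stand-in for Mode-S EHS reports (hypothetical)
half = thin(obs, 0.5)     # 500 reports, each kept at most once
```

Sampling without replacement preserves the spatial randomness of the retained reports, which is what allows the saturation of forecast error reduction to be attributed to observation density rather than to any geographic selection.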