Abstract
This study investigates the transition from current practical predictability of midlatitude weather to its intrinsic limit. For this purpose, estimates of the current initial condition uncertainty of 12 real cases are reduced in several steps from 100% to 0.1% and propagated in time with a global numerical weather prediction model (ICON at 40 km resolution) that is extended by a stochastic convection scheme to better represent error growth from unresolved motions. With the proviso that the perfect model assumption is sufficiently valid, it is found that the potential forecast improvement that could be obtained by perfecting the initial conditions is 4–5 days. This improvement is essentially achieved with a reduction of the initial condition uncertainty by 90% relative to current conditions, at which point the dominant error growth mechanism changes: With respect to physical processes, a transition occurs from rotationally driven initial error growth to error growth dominated by latent heat release in convection and by the divergent component of the flow. With respect to spatial scales, a transition from large-scale up-amplitude error growth to very rapid initial error growth on small scales is found. Reference experiments with a deterministic convection scheme show 5%–10% longer predictability, but only if the initial condition uncertainty is small. These results confirm that planetary-scale predictability is intrinsically limited by rapid error growth due to latent heat release in clouds through an upscale-interaction process, while this interaction process is unimportant on average for current levels of initial condition uncertainty.
Significance Statement
Weather predictions provide high socioeconomic value and have improved greatly over recent decades. However, it is widely believed that there is an intrinsic limit to how far into the future the weather can be predicted. Using numerical simulations with an innovative representation of convection, we are able to confirm the existence of this limit and to demonstrate which physical processes are responsible. We further provide quantitative estimates of the limit and of the remaining improvement potential. These results make clear that our current weather prediction capabilities are not yet at their intrinsic limit and could still be improved significantly with advances in atmospheric observation and simulation technology in the coming decades.
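As an illustration of the experimental design described in the abstract above, the reduction of initial condition uncertainty can be thought of as rescaling the perturbation of each initial state toward a reference analysis. The following Python sketch shows this idea under simplifying assumptions; the scaling factors, array shapes, and function name are illustrative and not taken from the paper's actual setup.

```python
import numpy as np

def rescale_initial_perturbation(analysis, perturbed, factor):
    """Shrink an initial-condition perturbation toward the reference analysis.

    analysis, perturbed: model state arrays (e.g., u, v, T on the model grid).
    factor: 1.0 keeps the full (100%) uncertainty, 0.001 keeps 0.1% of it.
    """
    return analysis + factor * (perturbed - analysis)

# Illustrative uncertainty levels, from current (100%) down to 0.1%
factors = [1.0, 0.1, 0.01, 0.001]
analysis = np.random.randn(90, 180)                 # placeholder 2-degree lat-lon field
perturbed = analysis + 0.5 * np.random.randn(90, 180)
scaled_states = {f: rescale_initial_perturbation(analysis, perturbed, f) for f in factors}
```

Each rescaled state would then serve as the initial condition of a separate forecast, so that error growth can be compared across uncertainty levels.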
Abstract
Warm conveyor belts (WCBs) associated with extratropical cyclones transport air from the lower troposphere into the tropopause region and contribute to upper-level ridge building and the formation of blocking anticyclones. Recent studies indicate that this constitutes an important source and magnifier of forecast uncertainty and errors in numerical weather prediction (NWP) models. However, a systematic evaluation of the representation of WCBs in NWP models has yet to be performed. Here, we employ the logistic regression models developed in Part I to identify the inflow, ascent, and outflow stages of WCBs in the European Centre for Medium-Range Weather Forecasts (ECMWF) subseasonal reforecasts for Northern Hemisphere winter in the period January 1997 to December 2017. We verify the representation of these WCB stages in terms of systematic occurrence frequency biases, forecast reliability, and forecast skill. Systematic WCB frequency biases emerge already at early lead times of around 3 days, with an underestimation of WCB outflow frequency over the North Atlantic and eastern North Pacific of around 40% relative to climatology. Biases in the predictor variables of the logistic regression models can partially explain these biases in WCB inflow, ascent, or outflow. Despite an overconfidence in predicting high WCB probabilities, skillful WCB forecasts are on average possible up to a lead time of 8–10 days, with more skill over the North Pacific than over the North Atlantic region. Our results corroborate that the current limited forecast skill for the large-scale extratropical circulation on subseasonal time scales beyond 10 days might be tied to the representation of WCBs and associated upscale error growth.
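The verification described above combines occurrence frequency biases with probabilistic skill measures. The sketch below illustrates two such measures, a frequency bias and a Brier skill score against climatology, for binary WCB occurrence fields; the data, threshold, and base rate are illustrative assumptions rather than the study's actual verification code.

```python
import numpy as np

def frequency_bias(forecast_prob, observed, threshold=0.5):
    """Ratio of forecast to observed WCB occurrence frequency (1 = unbiased)."""
    forecast_event = forecast_prob >= threshold
    return forecast_event.mean() / observed.mean()

def brier_skill_score(forecast_prob, observed, climatology):
    """Brier skill score relative to a climatological reference (>0 = skillful)."""
    bs = np.mean((forecast_prob - observed) ** 2)
    bs_ref = np.mean((climatology - observed) ** 2)
    return 1.0 - bs / bs_ref

# Illustrative data: 1000 grid-point/lead-time samples with a 20% base rate
obs = (np.random.rand(1000) < 0.2).astype(float)
prob = np.clip(obs * 0.6 + 0.1 + 0.1 * np.random.rand(1000), 0.0, 1.0)
print(frequency_bias(prob, obs), brier_skill_score(prob, obs, climatology=0.2))
```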
Abstract
The waveguidability of an upper-tropospheric zonal jet quantifies its propensity to duct Rossby waves in the zonal direction. This property has played a central role in previous attempts to explain large wave amplitudes and the subsequent occurrence of extreme weather. In these studies, waveguidability was diagnosed with the help of ray tracing arguments using the zonal average of the observed flow as the relevant background state. Here, it is argued that this method is problematic both conceptually and mathematically. The issue is investigated in the framework of the nondivergent barotropic model. This model allows the straightforward computation of an alternative “zonalized” background state, which is obtained through conservative symmetrization of potential vorticity contours and that is argued to be superior to the zonal average. Using an idealized prototypical flow configuration with large-amplitude eddies, it is shown that the two different choices for the background state yield very different results; in particular, the zonal-mean background state diagnoses a zonal waveguide, while the zonalized background state does not. This result suggests that the existence of a waveguide in the zonal-mean background state is a consequence of, rather than a precondition for, large wave amplitudes, and it would mean that the direction of causality is opposite to the usual argument. The analysis is applied to two heatwave episodes from summer 2003 and 2010, yielding essentially the same result. It is concluded that previous arguments about the role of waveguidability for extreme weather need to be carefully reevaluated to prevent misinterpretation in the future.
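For context, the ray-tracing diagnosis of waveguidability referred to above is typically based on the stationary wavenumber of the background zonal flow, written here in its simplified beta-plane form for a zonal flow $\bar{u}(y)$ (a textbook expression quoted as background, not a result of this paper):

$$ K_s(y) = \sqrt{\frac{\beta - \partial^2 \bar{u}/\partial y^2}{\bar{u}}} $$

A local maximum of $K_s$ flanked by turning points then acts as a zonal waveguide for Rossby waves of lower zonal wavenumber; the paper's argument concerns which background state $\bar{u}$ this diagnostic should be applied to.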
Abstract
The physical and dynamical processes associated with warm conveyor belts (WCBs) have an important effect on midlatitude dynamics and are sources of forecast uncertainty. Moreover, WCBs modulate the large-scale extratropical circulation and can propagate and amplify forecast errors. Therefore, it is desirable to assess the representation of WCBs in numerical weather prediction (NWP) models, in particular on medium-range to subseasonal forecast time scales. Most often, WCBs are identified as coherent bundles of Lagrangian trajectories that ascend from the lower to the upper troposphere within a time interval of 2 days. Although this Lagrangian approach has significantly advanced the understanding of the processes involved, the calculation of trajectories is computationally expensive and requires NWP data at a high spatial […]
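To make the Lagrangian criterion above concrete, trajectory-based WCB climatologies commonly require an ascent of roughly 600 hPa within 48 h. The Python sketch below applies such a filter to a precomputed trajectory; the 600-hPa threshold, sampling interval, and function name are assumptions for illustration and are not taken from this paper.

```python
import numpy as np

def is_wcb(pressure_hpa, dt_hours=6.0, ascent_hpa=600.0, window_hours=48.0):
    """Return True if a trajectory ascends by at least `ascent_hpa` within any
    window of `window_hours` (a pressure decrease corresponds to ascent).

    pressure_hpa: 1D array of pressure along one trajectory, sampled every dt_hours.
    """
    steps = int(window_hours / dt_hours)
    p = np.asarray(pressure_hpa)
    ascent = p[:-steps] - p[steps:]          # positive where the parcel rises
    return bool(np.any(ascent >= ascent_hpa))

# Illustrative trajectory: parcel rising from 950 hPa to 300 hPa over two days
traj = np.linspace(950.0, 300.0, 9)          # 9 samples at 6-h spacing = 48 h
print(is_wcb(traj))                          # True
```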
Abstract
Global model simulations together with a stochastic convection scheme are used to assess the intrinsic limit of predictability that originates from convection up to planetary scales. The stochastic convection scheme has been shown to introduce an appropriate amount of variability onto the model grid without the need to resolve the convection explicitly. This greatly reduces computational cost and enables a set of 12 cases, equally distributed over 1 year, with five ensemble members per case generated by the stochastic convection scheme. As a metric, difference kinetic energy at 300 hPa over the midlatitudes of both hemispheres is used. With this metric the intrinsic limit is estimated to be about 17 days when a threshold of 80% of the saturation level is applied. The error level at 3.5 days is roughly comparable to the initial-condition uncertainty of the current ECMWF data assimilation system, which suggests a potential gain of 3.5 forecast days from perfecting the initial conditions. Error-growth experiments with a deterministic convection scheme show errors about half as large at early forecast times and an estimate of intrinsic predictability that is about 10% longer, confirming the overconfidence of deterministic convection schemes.
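A minimal sketch of the error metric and threshold described above, assuming gridded 300-hPa winds from two simulations: difference kinetic energy (DKE) is averaged over both midlatitude bands, and the intrinsic limit is read off where DKE first reaches 80% of its saturation level. The latitude bands, area weighting, and function names are illustrative assumptions.

```python
import numpy as np

def difference_kinetic_energy(u1, v1, u2, v2, lats):
    """Area-weighted midlatitude mean of 0.5*((u1-u2)^2 + (v1-v2)^2) at one level.

    u*, v*: wind fields with shape (nlat, nlon); lats: latitudes in degrees.
    """
    dke = 0.5 * ((u1 - u2) ** 2 + (v1 - v2) ** 2)
    w = np.cos(np.deg2rad(lats))
    mask = (np.abs(lats) >= 30) & (np.abs(lats) <= 60)   # both midlatitude bands
    return np.average(dke[mask].mean(axis=-1), weights=w[mask])

def predictability_limit(dke_series, times, saturation, threshold=0.8):
    """First lead time at which DKE exceeds `threshold` times its saturation level.

    Assumes the threshold is actually reached within the series.
    """
    idx = np.argmax(np.asarray(dke_series) >= threshold * saturation)
    return times[idx]
```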
Abstract
Research on the mesoscale kinetic energy spectrum over the past few decades has focused on finding a dynamical mechanism that gives rise to a universal spectral slope. Here we investigate the variability of the spectrum using 3 years of kilometer-resolution analyses from COSMO configured for Germany (COSMO-DE). It is shown that the mesoscale kinetic energy spectrum is highly variable in time but that a minimum in variability is found on scales around 100 km. The high variability found on the small-scale end of the spectrum (around 10 km) is positively correlated with the precipitation rate where convection is a strong source of variance. On the other hand, variability on the large-scale end (around 1000 km) is correlated with the potential vorticity, as expected for geostrophically balanced flows. Accordingly, precipitation at small scales is more highly correlated with divergent kinetic energy, and potential vorticity at large scales is more highly correlated with rotational kinetic energy. The presented findings suggest that the spectral slope and amplitude on the mesoscale range are governed by an ever-changing combination of the upscale and downscale impacts of these large- and small-scale dynamical processes rather than by a universal, intrinsically mesoscale dynamical mechanism.
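As a sketch of the basic diagnostic behind such an analysis, a one-dimensional kinetic energy spectrum can be estimated from gridded winds with a discrete Fourier transform along one direction. The grid spacing, normalization, and absence of detrending or windowing here are simplifying assumptions, not the COSMO-DE processing chain.

```python
import numpy as np

def kinetic_energy_spectrum(u, v, dx_km):
    """1D kinetic energy spectrum along the last (zonal) axis of u and v.

    Returns wavenumbers (cycles per km) and the spectrum averaged over the rows.
    """
    n = u.shape[-1]
    uk = np.fft.rfft(u - u.mean(axis=-1, keepdims=True), axis=-1)
    vk = np.fft.rfft(v - v.mean(axis=-1, keepdims=True), axis=-1)
    ke = 0.5 * (np.abs(uk) ** 2 + np.abs(vk) ** 2) / n ** 2
    wavenumbers = np.fft.rfftfreq(n, d=dx_km)
    return wavenumbers, ke.mean(axis=0)

# Illustrative 2.8-km grid with 512 points per row
u = np.random.randn(100, 512); v = np.random.randn(100, 512)
k, E = kinetic_energy_spectrum(u, v, dx_km=2.8)
```

The time variability discussed in the abstract would then correspond to how `E` changes when the calculation is repeated for each analysis time.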
Abstract
The response of clouds to changes in the aerosol concentration is complex and may differ depending on the cloud type, the aerosol regime, and environmental conditions. In this study, a novel technique is used to systematically modify the environmental conditions in realistic convection-resolving simulations for cases with weak and strong large-scale forcing over central Europe with the Consortium for Small-Scale Modeling (COSMO) model. Besides control runs with quasi-operational settings, initial and boundary temperature profiles are modified with linearly increasing temperature increments from 0 to 5 K between 3 and 12 km AGL to represent different amounts of convective available potential energy (CAPE) and relative humidity. The results show a systematic decrease of total precipitation with increasing cloud condensation nuclei (CCN) concentrations for the cases with strong synoptic forcing, caused by a suppressed warm-rain process, whereas no systematic aerosol effect is simulated for weak synoptic forcing. The effect of increasing CCN tends to be stronger in the simulations with increased temperatures and lower CAPE. While the large-scale domain-averaged responses to increased CCN are weak, the precipitation forming over mountainous terrain reveals a stronger sensitivity for most of the analyzed cases. Our findings also demonstrate that the role of the warm-rain process is more important for strong than for weak synoptic forcing. The aerosol effect is largest for weakly forced conditions but more predictable for the strongly forced cases. However, more accurate environmental conditions are much more important than accurate aerosol assumptions, especially for weak large-scale forcing.
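Within the stated layer, the imposed warming corresponds to a linear ramp added to the initial and boundary temperature profiles; restating the abstract in formula form (the treatment outside the 3–12 km layer is not specified here):

$$ \Delta T(z) = \Delta T_{\max}\,\frac{z - 3\ \mathrm{km}}{12\ \mathrm{km} - 3\ \mathrm{km}}, \qquad 3\ \mathrm{km} \le z \le 12\ \mathrm{km}, \quad \Delta T_{\max} \in [0, 5\ \mathrm{K}]. $$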
Abstract
The statistical theory of convective variability developed by Craig and Cohen in 2006 has provided a promising foundation for the design of stochastic parameterizations. The simplifying assumptions of this theory, however, were made with tropical equilibrium convection in mind. This study investigates the predictions of the statistical theory in real-weather case studies of nonequilibrium summertime convection over land. For this purpose, a convection-permitting ensemble is used in which all members share the same large-scale weather conditions but the convection is displaced using stochastic boundary layer perturbations. The results show that the standard deviation of the domain-integrated mass flux is proportional to the square root of its mean over a wide range of scales. This confirms the general applicability and scale adaptivity of the Craig and Cohen theory for complex weather. However, clouds tend to cluster on scales of around 100 km, particularly in the morning and evening. This strongly impacts the theoretical predictions of the variability, since clustering is not accounted for in the theory. Furthermore, the mass flux per cloud closely follows an exponential distribution if all clouds are considered together and if overlapping cloud objects are separated. The nonseparated cloud mass flux distribution resembles a power law. These findings support the use of the theory for stochastic parameterizations but also highlight areas for improvement.
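For reference, the two predictions of the Craig and Cohen theory tested above are the exponential distribution of the mass flux $m$ of individual clouds and the square-root scaling of fluctuations of the domain-integrated mass flux $M$; in the theory's standard form (quoted here as background, with symbols that may differ from the paper's notation):

$$ p(m) = \frac{1}{\langle m \rangle} \exp\!\left(-\frac{m}{\langle m \rangle}\right), \qquad \sigma_M = \sqrt{\langle \delta M^2 \rangle} = \sqrt{2\,\langle m \rangle\,\langle M \rangle} \;\propto\; \sqrt{\langle M \rangle}. $$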
Abstract
The spatial scale dependence of midlatitude water vapor variability in the high-resolution limited-area model COSMO is evaluated using diagnostics of scaling behavior. Past analysis of airborne lidar measurements showed that structure function scaling exponents depend on the corresponding airmass characteristics, and that a classification of the troposphere into convective and nonconvective layers led to significantly different power-law behaviors for each of these two regimes. In particular, scaling properties in the convective air mass were characterized by rough and highly intermittent data series, whereas the nonconvective regime was dominated by smoother structures with weaker small-scale variability. This study finds similar results in a model simulation with an even more pronounced distinction between the two air masses. Quantitative scaling diagnostics agree well with measurements in the nonconvective air mass, whereas in the convective air mass the simulation shows a much higher intermittency. Sensitivity analyses performed with the model data to assess the impact of limitations of the observational dataset indicate that the lidar analyses most likely underestimated the intermittency in convective air masses, because the small samples from single flight tracks led to a bias when data with poor fits were rejected. Though the quantitative estimation of intermittency remains uncertain for convective air masses, the ability of the model to capture the dominant weather regime dependence of water vapor scaling properties is encouraging.
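The scaling diagnostics referred to above are based on structure functions of horizontal water vapor series. In the standard formulation (background material, not notation specific to this study), the $q$th-order structure function of a quantity $x$ at separation $r$ is

$$ S_q(r) = \left\langle \left| x(s + r) - x(s) \right|^q \right\rangle \;\propto\; r^{\zeta(q)}, $$

and a nonlinear dependence of the exponents $\zeta(q)$ on $q$ indicates intermittency.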
Abstract
Stochastic perturbations allow for the representation of small-scale variability due to unresolved physical processes. However, the properties of this variability depend on model resolution and weather regime. A physically based method is presented for introducing stochastic perturbations into kilometer-scale atmospheric models that explicitly accounts for these dependencies. The amplitude of the perturbations is based on information obtained from the model’s subgrid turbulence parameterization, while the spatial and temporal correlations are based on physical length and time scales of the turbulent motions. The stochastic perturbations lead to the triggering of additional convective cells and improved precipitation amounts in simulations of two days with weak synoptic forcing of convection but different amounts of precipitation. The perturbations had little impact in a third case study, where precipitation was mainly associated with a cold front. In contrast, an unphysical version of the scheme with constant perturbation amplitude performed poorly, since there was no perturbation amplitude that would give improved amounts of precipitation during the day without generating spurious convection at other times.
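A minimal sketch of the kind of perturbation construction described above: a random field evolves as a first-order autoregressive process with the turbulent time scale, is smoothed to the turbulent length scale, and is scaled by an amplitude derived from the subgrid turbulence variance. The smoothing method, parameter values, and coupling to the model equations are illustrative assumptions, not the scheme's actual implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def update_perturbation_field(eta, dt, tau, length_scale_gridpoints):
    """Advance a spatially correlated AR(1) random field by one time step.

    eta: current field (nlat, nlon); dt, tau: time step and turbulent time scale (s);
    length_scale_gridpoints: turbulent length scale expressed in grid points.
    """
    alpha = np.exp(-dt / tau)                                        # temporal correlation
    noise = np.random.randn(*eta.shape)
    noise = gaussian_filter(noise, sigma=length_scale_gridpoints)    # spatial correlation
    noise /= noise.std()                                             # keep unit variance
    return alpha * eta + np.sqrt(1.0 - alpha ** 2) * noise

# Perturbation for, e.g., temperature: amplitude from subgrid variance (placeholder field)
eta = np.zeros((200, 200))
for _ in range(10):
    eta = update_perturbation_field(eta, dt=60.0, tau=600.0, length_scale_gridpoints=5.0)
subgrid_std_T = 0.2 * np.ones_like(eta)       # placeholder standard deviation, in K
dT_perturbation = subgrid_std_T * eta
```

Because both the amplitude and the correlation scales are taken from the turbulence parameterization, such perturbations adapt automatically to resolution and weather regime, which is the design goal stated in the abstract.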