Abstract
This study investigates whether quasi-random surface vertical vorticity is sufficient for tornadogenesis when combined with an updraft typical of tornadic supercells. The viability of this pathway could mean that a coherent process to produce well-organized surface vertical vorticity is rather unimportant. Highly idealized simulations are used to establish random noise as a possible seed for the production of tornado-like vortices (TLVs). A number of sensitivities are then examined across the simulations. The most explanatory predictor of whether a TLV will form (and how strong it will become) is the maximal value of initial surface circulation found near the updraft. Perhaps surprisingly, sufficient circulation for tornadogenesis is often present even when the surface vertical vorticity field lacks any obvious organized structure. The other key ingredient for TLV formation is confirmed to be a large vertical gradient in vertical velocity close to the ground (to promote stretching). Overall, it appears that random surface vertical vorticity is indeed sufficient for TLV formation given adequate stretching. However, it is shown that longer-wavelength noise is more likely to be associated with substantial surface circulation (because circulation is the areal integral of vertical vorticity). Thus, coherent vorticity sources that produce longer-wavelength structures are likely to be the most supportive of tornadogenesis.
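The circulation argument above lends itself to a toy calculation: for random vorticity noise of fixed amplitude, the circulation over a disk of fixed radius (the areal integral of vorticity) tends to be larger when the noise has a longer wavelength, because short-wavelength positive and negative patches cancel within the disk. The grid size, filtering method, and parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def max_disk_circulation(zeta, dx, radius):
    """Largest |circulation| over disks of the given radius, searched on a coarse grid of centers."""
    ny, nx = zeta.shape
    y, x = np.mgrid[0:ny, 0:nx]
    best = 0.0
    for cy in range(0, ny, 8):
        for cx in range(0, nx, 8):
            mask = (x - cx) ** 2 + (y - cy) ** 2 <= (radius / dx) ** 2
            best = max(best, abs(zeta[mask].sum() * dx * dx))  # areal integral of vorticity
    return best

rng = np.random.default_rng(0)
n, dx = 128, 100.0  # 12.8 km domain, 100 m grid spacing (assumed)

def band_limited_noise(wavelength):
    """White noise low-pass filtered to retain scales >= wavelength, normalized to ~0.01 s^-1."""
    noise = rng.standard_normal((n, n))
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    keep = np.sqrt(kx**2 + ky**2) <= 1.0 / wavelength
    field = np.fft.ifft2(np.fft.fft2(noise) * keep).real
    return 0.01 * field / field.std()

c_short = max_disk_circulation(band_limited_noise(400.0), dx, radius=1000.0)
c_long = max_disk_circulation(band_limited_noise(3200.0), dx, radius=1000.0)
# Same vorticity amplitude, but the longer-wavelength field yields the larger circulation.
```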
Abstract
The hypothesis that the predictability of planetary-scale low-frequency variability in boreal winter depends on the atmospheric state was examined. We first computed six typical weather patterns from 500-hPa geopotential height anomalies in the Northern Hemisphere using self-organizing map (SOM) and k-means clustering analyses. Next, using 11 models from the subseasonal-to-seasonal (S2S) operational and reforecast archive, we computed each model’s climatology as a function of lead time to evaluate model bias. Although the forecast bias depends on the model, it is consistently the largest when the forecast begins from the atmospheric state with a blocking-like pattern in the eastern North Pacific. Moreover, the ensemble-forecast spread based on S2S multimodel forecast data was compared with empirically estimated Fokker–Planck equation (FPE) parameters based on reanalysis data. The multimodel mean ensemble-forecast spread was correlated with the diffusion tensor norm; they are large for the cases when the atmospheric state started from a cluster with a blocking-like pattern. As the multimodel mean is expected to substantially reduce model biases and may approximate the predictability inherent in nature, we can summarize that the atmospheric state corresponding to the cluster was less predictable than others.
Significance Statement
The purpose of this study is to examine the performance of week-to-month forecasts by analyzing multimodel forecast results. We confirmed the hypothesis, proposed by previous studies, that forecast accuracy depends on the atmospheric state. Combined with a data-based estimate of predictability, an atmospheric state with an anticyclonic anomaly in the eastern North Pacific exhibited low predictability. Our results provide a method to anticipate the skill of week-to-month forecasts.
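The pattern-classification step described above can be sketched with a toy calculation: each daily 500-hPa geopotential height anomaly map is flattened into a vector and the days are grouped into k typical weather patterns. The paper uses a SOM followed by clustering; as a simplifying assumption, plain k-means on synthetic anomaly maps stands in for that pipeline here.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    # Farthest-point initialization keeps the seeds in distinct clusters.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    # Lloyd iterations: assign each day to its nearest pattern, then update patterns.
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Synthetic "days": three underlying anomaly patterns plus noise on a 10 x 20 grid.
patterns = rng.standard_normal((3, 200))
idx = rng.integers(0, 3, size=300)
days = patterns[idx] + 0.3 * rng.standard_normal((300, 200))
labels, centers = kmeans(days, k=3)
```

With well-separated synthetic patterns, the recovered clusters closely match the generating ones; real anomaly fields are far noisier, which is one motivation for the SOM preprocessing.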
Abstract
Midlatitude storm tracks are the most prominent feature of the midlatitude climate. The equatorward boundary of the storm tracks marks the transition from the dry subtropics to the temperate midlatitudes. This boundary can be estimated as the lowest latitude of efficient baroclinic growth. Scaling theories for the lowest latitude of baroclinic growth were previously suggested based on the domain-averaged parameters of the Eady growth rate and supercriticality. In this study, a new estimate for the lowest latitude of baroclinic growth is proposed, based on the assumption that baroclinic growth is limited by the vertical scale of eddy fluxes. An equation for the eddy displacement flux is obtained from which the vertical scale is calculated, given the zonal-mean zonal wind and temperature profiles. It is found that the vertical scale of the eddy displacement flux and the observed baroclinic conversion rate decrease rapidly toward the equator around the same latitude. The seasonal cycle of the lowest latitude of baroclinic growth, calculated from the observed baroclinic conversion rate, is compared with the theoretical estimates. The estimates based on the vertical scale of the eddy displacement flux and supercriticality agree well with the observed lowest latitude of baroclinic growth. In contrast, the estimate based on the Eady growth rate is located around 10°–15° equatorward. The estimate of the lowest latitude of baroclinic growth may be used in future studies for explaining variations in the properties of the storm track, the Hadley cell edge, and the subtropical jet.
Significance Statement
The lowest latitude of baroclinic growth marks the transition from the dry and stable subtropics to the moist and variable midlatitudes. Estimating this latitude based on mean-flow variables can potentially advance the theoretical understanding of the latitudinal structure of the atmospheric circulation around the subtropics and midlatitudes. This study suggests a new method for estimating the lowest latitude of baroclinic growth, which is found to predict the observed lowest latitude of baroclinic energy conversion relatively well, compared with the traditional prediction based on the Eady growth rate.
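The Eady growth rate used above as the traditional estimate is commonly written as sigma_E = 0.31 (f / N) |dU/dz|, with f the Coriolis parameter, N the Brunt–Väisälä frequency, and dU/dz the vertical shear of the zonal-mean zonal wind. A minimal sketch with representative midlatitude values (the numbers are assumptions, not taken from the paper):

```python
import numpy as np

OMEGA = 7.292e-5  # Earth's rotation rate (s^-1)

def eady_growth_rate(lat_deg, dudz, N):
    """Eady growth rate sigma_E = 0.31 * |f| * |dU/dz| / N (s^-1)."""
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))
    return 0.31 * abs(f) * abs(dudz) / N

# Shear of ~3 m/s per km and N = 0.01 s^-1 at 45 deg latitude (illustrative values).
sigma = eady_growth_rate(45.0, dudz=3e-3, N=1e-2)
efold_days = 1.0 / (sigma * 86400.0)  # e-folding time, ~1 day at these values
```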
Abstract
It is recognized that the atmosphere’s predictability is intrinsically limited by unobservably small uncertainties that are beyond our capability to eliminate. However, there have been discussions in recent years on whether forecast error grows upscale (small-scale error grows faster and transfers to progressively larger scales) or up-amplitude (grows at all scales at the same time) when unobservably small-amplitude initial uncertainties are imposed at the large scales and limit the intrinsic predictability. This study uses large-scale small-amplitude initial uncertainties of two different structures—one idealized, univariate, and isotropic, the other realistic, multivariate, and flow dependent—to examine the error growth characteristics in the intrinsic predictability regime associated with a record-breaking rainfall event that happened on 19–20 July 2021 in China. Results indicate upscale error growth characteristics regardless of the structure of the initial uncertainties: the errors at smaller scales grow fastest first; as the forecasts continue, the wavelengths of the fastest error growth gradually shift toward larger scales with reduced error growth rates. Therefore, error growth from smaller to larger scales was more important than the growth directly at the large scales of the initial errors. These upscale error growth characteristics also depend on the perturbed and examined quantities: if the examined quantity is perturbed, then its errors grow upscale; if there is no initial uncertainty in the examined quantity, then its errors grow at all scales at the same time, although its smaller-scale errors still grow faster for the first several hours, suggesting the existence of the upscale error growth.
Significance Statement
This study compared the error growth characteristics associated with the atmosphere’s intrinsic predictability under two different structures of unobservably small-amplitude, large-scale initial uncertainties: one idealized, univariate, and isotropic, the other realistic, multivariate, and flow dependent. Regardless of the structure of the initial uncertainties, the errors clearly grow upscale rather than up-amplitude. The large-scale errors do not grow if their initial amplitudes are much bigger than those of the small-scale errors. This study also examined how the error growth characteristics change when the quantity used to describe the error growth differs from the quantity that contains the initial uncertainty, suggesting the importance of including multivariate, covariant uncertainties of state variables in atmospheric predictability studies.
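Scale-dependent error growth of the kind discussed above is typically diagnosed by decomposing the difference between a perturbed and a control forecast by horizontal wavelength. The sketch below is a generic version of that diagnostic (binning 2D FFT power by radial wavenumber), not the authors' exact code.

```python
import numpy as np

def error_spectrum(control, perturbed, dx):
    """Error power binned by radial wavenumber; returns (wavelengths, spectrum)."""
    err = perturbed - control
    n = err.shape[0]
    power = np.abs(np.fft.fft2(err)) ** 2 / n**4
    k = np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k)
    kr = np.sqrt(kx**2 + ky**2)
    bins = np.fft.rfftfreq(n, d=dx)[1:]  # positive wavenumbers
    spec = np.array([power[(kr >= lo) & (kr < hi)].sum()
                     for lo, hi in zip(bins[:-1], bins[1:])])
    wavelengths = 1.0 / bins[:-1]
    return wavelengths, spec

# Toy fields standing in for two forecasts on a 64 x 64 grid with 3-km spacing (assumed).
rng = np.random.default_rng(2)
control = rng.standard_normal((64, 64))
perturbed = control + 0.1 * rng.standard_normal((64, 64))
wl, spec = error_spectrum(control, perturbed, dx=3.0)
```

Tracking which wavelengths of `spec` grow fastest with lead time distinguishes upscale growth (fast growth first at the shortest wavelengths) from up-amplitude growth (all wavelengths at once).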
Abstract
We performed a detailed analysis of ground-based data to investigate changes in the morphological properties and particle size distribution of precipitation particles as they fall through the melting layer (ML). In July 2013, we started continuous precipitation monitoring in Sapporo (Japan) with a two-dimensional video disdrometer, an electrical balance–type snow gauge, and an X-band marine radar. We used data collected from 0943 to 1040 Japan standard time (JST) 10 March 2015 for analysis, when the bright band progressively descended to the ground surface and precipitation intensity was moderate and approximately steady (∼10 mm h−1). We found that the aggregation of aggregates in the upper half of the ML did not necessarily result in large raindrops. Almost all of the snow particles with a melted diameter (Dm) ≥ 4 mm broke up before they melted into raindrops of equivalent size. The apparent one-to-one relationship between melting snow particles and raindrops held for particles with 2 < Dm < 3 mm. Most small raindrops were generated by the successive breakup of melting particles in the lower half of the ML.
Abstract
The authors explore the dynamical origins of rotation of a mature tornado-like vortex (TLV) using an idealized numerical simulation of a supercell thunderstorm. Using 30-min forward parcel trajectories that terminate at the base of the TLV, the vorticity dynamics are analyzed for n = 7 parcels. Aside from the integration of the individual terms of the traditional vorticity equation, an alternative formulation of the vorticity equation and its integral, here referred to as vorticity source decomposition, is employed. This formulation is derived on the basis of Truesdell’s “basic vorticity formula,” which is obtained by first formulating the vorticity in material (Lagrangian) coordinates, and then obtaining the components relative to the fixed spatial (Eulerian) basis by applying the vector transformation rule. The analysis highlights surface drag as the most reliable vorticity source for the rotation at the base of the vortex for the analyzed parcels. Moreover, the vorticity source decomposition exposes the importance of small amounts of vorticity produced baroclinically, which may become significant after sufficient stretching occurs. Further, it is shown that ambient vorticity, upon being rearranged as the trajectories pass through the storm, may for some parcels directly contribute to the rotation of the TLV. Finally, the role of diffusion is addressed using analytical solutions of the steady Burgers–Rott vortex, suggesting that diffusion cannot aid in maintaining the vortex core.
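The steady Burgers–Rott vortex invoked above has the closed-form azimuthal velocity profile v(r) = Gamma / (2 pi r) * (1 − exp(−a r² / (2 nu))), where Gamma is the far-field circulation, a the stretching (convergence) rate, and nu the (eddy) viscosity. A minimal sketch of evaluating this profile, with illustrative parameter values that are assumptions rather than values from the simulation:

```python
import numpy as np

def burgers_rott_v(r, gamma, a, nu):
    """Azimuthal velocity of the steady Burgers-Rott vortex at radius r."""
    return gamma / (2.0 * np.pi * r) * (1.0 - np.exp(-a * r**2 / (2.0 * nu)))

r = np.linspace(1.0, 500.0, 500)  # radius (m), excluding the r = 0 singularity of the formula
v = burgers_rott_v(r, gamma=5e4, a=0.1, nu=100.0)

r_max = r[np.argmax(v)]  # radius of maximum wind; analytically r_max ~ 1.12 * sqrt(2*nu/a)
```

The core radius set by the balance of stretching and diffusion (r_max above) is the kind of quantity the authors compare against the simulated vortex when assessing whether diffusion can maintain the core.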
Abstract
The Kinematic Driver-Aerosol (KiD-A) intercomparison was established to test the hypothesis that detailed warm microphysical schemes provide a benchmark for lower-complexity bulk microphysics schemes. KiD-A is the first intercomparison to compare multiple Lagrangian cloud models (LCMs), size bin-resolved schemes, and double-moment bulk microphysics schemes in a consistent 1D dynamic framework and box cases. In the absence of sedimentation and collision–coalescence, the drop size distributions (DSDs) from the LCMs exhibit similar evolution with expected physical behaviors and good interscheme agreement, with the volume mean diameter (Dvol) from the LCMs within 1%–5% of each other. In contrast, the bin schemes exhibit nonphysical broadening with condensational growth. These results further strengthen the case that LCMs are an appropriate numerical benchmark for DSD evolution under condensational growth. When precipitation processes are included, however, the simulated liquid water path, precipitation rates, and response to modified cloud drop/aerosol number concentrations from the LCMs vary substantially, while the bin and bulk schemes are relatively more consistent with each other. The lack of consistency in the LCM results stems from both the collision–coalescence process and the sedimentation process, limiting their application as a numerical benchmark for precipitation processes. Reassuringly, however, precipitation from bulk schemes, which are the basis for cloud microphysics in weather and climate prediction, is within the spread of precipitation from the detailed schemes (LCMs and bin). Overall, this intercomparison identifies the need for focused effort on the comparison of collision–coalescence methods and sedimentation in detailed microphysics schemes, especially LCMs.
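The volume mean diameter used above to compare LCM drop size distributions is, under the usual definition (an assumption here, not stated in the abstract), Dvol = (sum(n_i * D_i³) / sum(n_i))^(1/3), i.e., the diameter of a drop carrying the mean drop volume. A short sketch on an illustrative exponential DSD:

```python
import numpy as np

def volume_mean_diameter(diameters, number_conc):
    """Dvol = (third moment / zeroth moment)^(1/3) of a discretized DSD."""
    return (np.sum(number_conc * diameters**3) / np.sum(number_conc)) ** (1.0 / 3.0)

# Illustrative exponential (Marshall-Palmer-like) DSD; slope parameter is assumed.
d = np.linspace(1e-6, 100e-6, 200)  # drop diameters (m)
n = np.exp(-d / 15e-6)              # relative number concentration
dvol = volume_mean_diameter(d, n)   # larger than the mean diameter, as the cube weights big drops
```

Because the cube weights large drops heavily, even a small nonphysical broadening of the DSD tail (as reported for the bin schemes) shifts Dvol noticeably, which is why it is a sensitive interscheme comparison metric.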
Abstract
A study of the vertical structure of postfrontal shallow clouds in the marine boundary layer over the Southern Ocean is presented. The central question of this two-part study regards cloud phase (liquid/ice) of precipitation, and the associated growth mechanisms. In this first part, data from the Measurements of Aerosols, Radiation, and Clouds over the Southern Ocean (MARCUS) field campaign are analyzed, starting with a 75-h case with continuous sea surface-based thermal instability, modest surface heat fluxes, an open-cellular mesoscale organization, and very few ice nucleating particles (INPs). The clouds are mostly precipitating and shallow (tops mostly around 2 km above sea level), with weak up- and downdrafts, and with cloud-top temperatures generally around −18° to −10°C. The case study is extended to three other periods of postfrontal shallow clouds in MARCUS. While abundant supercooled liquid water is commonly present, an experimental cloud-phase algorithm classifies nearly two-thirds of clouds in the 0° to −5°C layer as containing ice (cloud ice, snow, or mixed phase), implying that much of the precipitation grows through cold-cloud processes. The best predictors of ice presence are cloud-top temperature, cloud depth, and INP concentration. Measures of convective activity and turbulence are found to be poor indicators of ice presence in the studied environment. The water-phase distribution in this cloud regime is explored through numerical simulations in Part II.
Significance Statement
Climate models generally predict a lower albedo than observed over the Southern Ocean, and this is largely attributed to a lack of cloudiness, especially in the postfrontal cold sector of midlatitude cyclones. This in turn may be due to an excess of ice in these simulated clouds, resulting in rapid precipitation fallout and an overly brief cloud lifespan. The objective of this study is to examine whether shallow postfrontal clouds over the Southern Ocean are dominated by supercooled drops, or by snow and ice, using data collected by a U.S. Department of Energy Atmospheric Radiation Measurement Mobile Facility deployed aboard an Australian Antarctic supply vessel. We find that these clouds contain much supercooled liquid, even though cloud-top temperatures are generally around −18° to −8°C, and that about two-thirds of the clouds just above the freezing level contain ice. Much of the precipitation appears to grow through cold-cloud processes above the freezing level, rather than as drizzle/rain through warm-cloud processes. Updrafts and/or turbulence in convection or in cloud-top generating cells do not initiate much ice, compared to observations elsewhere in a similar temperature range. This may be attributable to the extremely low concentration of ice nucleating particles in this environment. Ultimately, the deepest clouds with the coldest cloud tops are most likely to be ice dominated.
Abstract
Part I of this series presented a detailed overview of postfrontal mixed-phase clouds observed during the Measurements of Aerosols, Radiation, and Clouds over the Southern Ocean (MARCUS) field campaign. In Part II, we focus on a multiday (23–26 February 2018) case with the aim of understanding ice production as well as model sensitivity to ice process parameterizations using the Weather Research and Forecasting (WRF) Model. The control simulation with the Predicted Particle Properties (P3) microphysics scheme underestimates the ice content and overestimates the supercooled liquid water content, contrary to the bias common in global climate models. The simulations targeted at ice production processes show negligible sensitivity to cloud droplet number concentrations. Further, neither increasing ice nucleating particle (INP) concentrations to an unrealistic level nor adjusting them to MARCUS field estimates alone guarantees more ice production in the model. However, the simulated clouds are found to be highly sensitive to the implementation of immersion freezing, the thresholding of condensation/deposition freezing initiation, and the rime splintering process. By increasing immersion freezing of cloud droplets, relaxing thresholds for condensation/deposition freezing, or removing rime splintering thresholds, the model significantly improves its performance in producing ice. The relaxation of the immersion freezing temperature threshold to the observed cloud-top temperature suggests an in-cloud seeder–feeder mechanism. The results of this work call for an increase in observations of INPs, especially over the remote Southern Ocean and at relatively high temperatures, and measurements of ice particle size distributions to better constrain ice nucleating processes in models.
Abstract
In this work, cloud ensemble statistics are extracted from idealized radiative–convective equilibrium simulations performed at horizontal grid spacings Δ ranging from 2 km to 125 m. At the coarsest resolution, convection remains randomly distributed in space such that the equilibrium statistical mechanics theory proposed by Craig and Cohen in 2006 (CC06; assumes Poisson distributed clouds and exponential mass flux distributions) remains valid. Using classical organization metrics, clustering is already observed at Δ = 1 km, but substantial deviations between the simulated cloud ensemble statistics and CC06 are only observed for grid spacings Δ < 500 m. At these resolutions, the cloud mass flux distributions exhibit heavy tails and cloud counts become overdispersed (higher variance than a Poisson distribution). These changes in ensemble statistics are accompanied by a shift in subcloud organization patterns and by the fact that individual cloudy updrafts begin to be resolved. Consequently, a horizontal grid spacing no larger than 250 m is recommended, not only to properly resolve the dynamics of individual convective clouds, but also to capture the mesoscale organization of the cloud ensemble. Finally, it is shown that the CC06 theory and our high-resolution results including mesoscale organization may be reconciled if one considers 1) areas smaller than approximately 2 km in size, corresponding roughly to the narrow bands along which clouds develop almost randomly; and 2) individual cloud cores instead of cloud objects, core mass fluxes being shown to generally follow exponential distributions.
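To make the statistical criteria concrete, the sketch below illustrates the two CC06 assumptions named above and the overdispersion diagnostic used to detect departures from them: cloud counts in equal-area boxes follow a Poisson distribution (variance-to-mean ratio near 1), and individual cloud mass fluxes follow an exponential distribution. This is a minimal illustration with synthetic data and arbitrary parameter values, not code or data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# CC06 assumption 1: cloud counts per equal-area box are Poisson distributed.
n_boxes = 10_000
mean_clouds_per_box = 5.0
counts = rng.poisson(mean_clouds_per_box, size=n_boxes)

# Dispersion index = variance / mean. It is ~1 for a Poisson ensemble;
# values > 1 indicate the overdispersion (clustering) the study reports
# at grid spacings below 500 m.
dispersion = counts.var() / counts.mean()

# CC06 assumption 2: individual cloud mass fluxes are exponentially
# distributed about a mean flux (illustrative value only).
mean_flux = 2.0e7  # kg s^-1, hypothetical
fluxes = rng.exponential(mean_flux, size=counts.sum())

print(f"dispersion index: {dispersion:.3f}")  # near 1 for random convection
print(f"mean cloud mass flux: {fluxes.mean():.3e} kg/s")
```

A clustered (overdispersed) ensemble would be diagnosed the same way: the dispersion index computed from the simulated cloud counts would exceed 1, and heavy-tailed mass flux distributions would show up as departures from the exponential fit.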