Abstract
This study focuses on the application of two standard inflow turbulence generation methods from computational fluid dynamics (CFD) to simulations of a growing convective boundary layer (CBL): the recycle–rescale (R-R) method and the digital filter–based (DF) method. The primary objective of this study is to expand the applicability of the R-R method to simulations of thermally driven CBLs; this version is called the extended R-R method. In previous studies, the DF method had already been extended to generate potential temperature perturbations, and this study investigated whether the extended DF method can likewise be applied to simulations of growing thermally driven CBLs. Idealized simulations of growing thermally driven CBLs using the extended R-R and DF methods were performed. The results showed that both extended methods could capture the characteristics of thermally driven CBLs. The extended R-R method reproduced turbulence in thermally driven CBLs better than the extended DF method in terms of the spectrum and histogram of vertical wind speed; however, it underestimated the height of the thermally driven CBL by about 100 m compared with the extended DF method. Sensitivity experiments were conducted on the parameters used in the extended DF and R-R methods. The results showed that underestimation of the length scale in the extended DF method causes a shortage of large-scale turbulence components. The sensitivity experiments also suggest that the driver region in the extended R-R method should be long enough to reproduce the spanwise movement of the roll vortices.
Significance Statement
Inflow turbulence generation methods for large-eddy simulation (LES) models are crucial for better downscaling of meteorological mesoscale models (RANS models) to microscale models (LES models). Various CFD methods have been developed, but few have been applied to simulations of thermally driven convective boundary layers (CBLs). To address this problem, we focused on a method that recycles turbulence [the recycle–rescale (R-R) method] and a method that synthetically generates turbulence [the digital filter–based (DF) method]. This study extends the R-R method to handle turbulence in thermally driven CBLs and investigates the applicability of the DF method to thermally driven CBL simulations. Both extended methods are effective for downscaling experiments and capture the characteristics of thermally driven CBLs.
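To make the digital filter idea concrete, the following is a minimal one-dimensional Python sketch of the DF approach to synthetic inflow turbulence: white noise is convolved with a normalized kernel whose width sets the correlation length, and the result is scaled to a target perturbation amplitude (the extended DF method applies the same machinery to potential temperature). The grid size, length scale, and rms amplitude below are illustrative assumptions, not values from the study.

```python
# A minimal 1-D sketch of the digital filter (DF) idea for synthetic inflow
# turbulence, in the spirit of Klein et al. (2003). Illustrative only: the
# grid, length scale, and target rms are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

n_pts = 256   # points across the inflow plane (assumption)
L = 8         # integral length scale in grid points (assumption)
N = 2 * L     # filter half-width

# Gaussian-like filter coefficients; the kernel width sets the correlation
# length of the generated perturbations.
k = np.arange(-N, N + 1)
b = np.exp(-np.pi * k**2 / (2.0 * L**2))
b /= np.sqrt(np.sum(b**2))  # normalize so output variance stays near 1

# Filter white noise into a spatially correlated field, then scale it to a
# target rms amplitude (e.g., a potential temperature perturbation for the
# extended DF method).
noise = rng.standard_normal(n_pts + 2 * N)
corr = np.convolve(noise, b, mode="valid")  # correlated, ~unit variance
theta_rms = 0.3                             # target rms in K (assumption)
theta_prime = theta_rms * corr

print(round(theta_prime.std(), 3))  # close to theta_rms
```

In the full method, one such filtered field is generated for each velocity (and temperature) component on the inflow plane at every time step, with temporal correlation imposed in the same way.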
Abstract
This study investigates the impacts of grid spacing and station network on surface analyses and forecasts of temperature, humidity, and winds over the complex terrain of the Beijing Winter Olympics. The high-resolution analyses are generated by a rapid-refresh integrated system that includes a topographic downscaling procedure. Results show that surface analyses become more accurate as the target grid spacing is refined: the average analysis errors of surface temperature, humidity, and winds are all significantly reduced at higher resolution. This improvement is mainly attributed to a more realistic representation of topographic effects in the integrated system, because topographic downscaling at finer grid spacing adds more detail in a complex mountain region. Refining the grid from 1 km to 100 m also largely improves 1–12-h forecasts of temperature and humidity, while wind shows only a slight improvement for 1–6-h forecasts. The influence of the station network on the surface analyses is further examined. Results show that the spatial distributions of temperature and humidity at a 100-m scale are more realistic and accurate when an intensive automatic weather station network is added, as more observational information can be absorbed. Adding the station network also reduces forecast errors, an effect that lasts for about 6 h. However, although surface winds display better analysis skill when more stations are added, winds in the mountaintop region are sometimes marginally degraded in both analysis and forecast. These results are helpful for improving analysis and forecast products in complex terrain and have implications for downscaling from a coarse grid to a finer one.
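The abstract does not detail the topographic downscaling procedure, so as a hedged illustration the sketch below shows one standard ingredient of such schemes: a lapse-rate correction applied when mapping a coarse temperature analysis onto finer terrain. The lapse rate and input values are placeholders, not the integrated system's actual settings.

```python
# Toy sketch of a lapse-rate terrain adjustment, a common ingredient of
# topographic downscaling. Hypothetical values throughout; not the paper's
# actual procedure.
GAMMA = 6.5e-3  # standard-atmosphere lapse rate, K per m (assumption)

def downscale_temperature(t_coarse, z_coarse, z_fine):
    """Adjust a coarse-grid temperature (K) to the fine-grid terrain
    height (m) using a constant lapse rate."""
    return t_coarse - GAMMA * (z_fine - z_coarse)

# Example: a valley point sitting 400 m below the coarse model terrain.
print(downscale_temperature(t_coarse=270.0, z_coarse=1800.0, z_fine=1400.0))
# -> 272.6 K (warmer at the lower, true elevation)
```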
Abstract
Increased flash drought awareness in recent years has motivated the development of numerous indicators for monitoring, early warning, and assessment. These flash drought indicators can act as a complementary set of tools with which to inform flash drought response and management. However, the limitations of each indicator must be measured and communicated between researchers and practitioners to ensure effectiveness. The limitations of any flash drought indicator are better understood and overcome through assessment of indicator sensitivity and consistency; such assessment, however, cannot assume that any single indicator properly represents the flash drought “truth.” To better understand the current state of flash drought monitoring, this study presents an intercomparison of nine widely used flash drought indicators. The indicators represent perspectives and processes that are known to drive flash drought, including evapotranspiration and evaporative demand, precipitation, and soil moisture. We find that no single flash drought indicator consistently outperforms all others across the contiguous United States. We do find that the evaporative demand- and evapotranspiration-driven indicators tend to lead precipitation- and soil moisture-based indicators in flash drought onset, but they also tend to produce more flash drought events overall. The regional and definition-specific variability in results supports the argument for a multi-indicator approach to flash drought monitoring, as advocated by recent studies. Furthermore, flash drought research—especially evaluation of historical and potential future changes in flash drought characteristics—should test multiple indicators, datasets, and methods for representing flash drought, and ideally employ a multi-indicator analysis framework rather than inferring all flash drought information from a single indicator.
Significance Statement
Rapid-onset or “flash” drought has been an increasing concern globally, with quickly intensifying impacts on agriculture, ecosystems, and water resources. Many tools and indicators have been developed to monitor and provide early warning for flash drought, ideally resulting in more time for effective mitigation and reduced impacts. However, there remains no widely accepted single method for defining, monitoring, and measuring flash drought, which means most newly developed indicators are compared with other individual indicators or with conditions and impacts in one or two flash drought events. In this study, we assess the state of flash drought monitoring through an intercomparison of nine widely used flash drought indicators that represent different aspects of flash drought. We find that no single flash drought indicator outperformed all others and suggest that a comprehensive flash drought monitor should leverage multiple, complementary indicators, datasets, and methods. Furthermore, we suggest flash drought research—especially that which examines historical or projected changes in flash drought characteristics—should test multiple indicators, datasets, and methods for analysis, thereby reducing the potentially confounding effects of sensitivity to a single indicator.
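To illustrate what one of the compared indicator families can look like, the sketch below implements a common soil-moisture-based flash drought definition, in the spirit of Ford and Labosier (2017): onset when the pentad soil moisture percentile first falls below the 20th percentile after exceeding the 40th percentile within the previous four pentads. The thresholds and input series are assumptions for illustration, not necessarily the exact configuration of any indicator in the study.

```python
# Sketch of a soil-moisture percentile flash drought indicator. Thresholds
# and the input series are illustrative assumptions.
import numpy as np

def flash_drought_onsets(sm_pct, start=40.0, end=20.0, max_pentads=4):
    """Return pentad indices where the percentile series first crosses
    below `end` after exceeding `start` within `max_pentads` steps."""
    onsets = []
    for t in range(1, len(sm_pct)):
        crossed = sm_pct[t] < end <= sm_pct[t - 1]  # first pentad below `end`
        rapid = sm_pct[max(0, t - max_pentads):t].max() > start
        if crossed and rapid:
            onsets.append(t)
    return onsets

pentads = np.array([55, 48, 42, 33, 24, 18, 15, 14, 22, 30], dtype=float)
print(flash_drought_onsets(pentads))  # -> [5]: a rapid drop below the 20th
```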
Abstract
Using radial velocity information from the European weather radar network is challenging because the network is heterogeneous and the Doppler velocity information is provided in different ways. Preprocessing is therefore needed to harmonize the data. Radar observations form a very high-resolution dataset, which is both demanding to process and much finer than the model resolution. One way of reducing the number of data is to create “super observations” (SO) by averaging observations within a predefined area. This paper describes the preprocessing necessary to use radar radial velocities in data assimilation, including the SO construction. Our main focus is to optimize the use of radial velocities in the HARMONIE–AROME numerical weather prediction model. Several experiments were run to find the best settings for the first-guess check limits and to tune the observation error value. The optimal size of the SO and the corresponding thinning distance for radar radial velocities were also studied. It was found that radial velocity and reflectivity information from weather radars can be treated differently with respect to the SO size and the thinning. A positive impact was found when adding the velocities together with the reflectivity using the same SO size and thinning distance, but the best results were obtained when the SO size and thinning distance for the radial velocities were smaller than the corresponding values for reflectivity.
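A minimal sketch of the superobbing step described here: radial velocities on the radar's polar grid are averaged into coarser range and azimuth boxes, reducing data volume and representativeness error. The box sizes and quality control threshold are illustrative assumptions, not the tuned HARMONIE–AROME settings.

```python
# Sketch of "super observation" (SO) construction by box-averaging radial
# velocities on a polar grid. Box sizes and min_obs are assumptions.
import numpy as np

def make_superobs(vr, n_az_box=5, n_range_box=10, min_obs=20):
    """vr: (n_azimuths, n_gates) radial velocities with NaN where missing.
    Returns box means; NaN where a box holds too few valid data."""
    n_az, n_rg = vr.shape
    out = np.full((n_az // n_az_box, n_rg // n_range_box), np.nan)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            box = vr[i * n_az_box:(i + 1) * n_az_box,
                     j * n_range_box:(j + 1) * n_range_box]
            good = box[~np.isnan(box)]
            if good.size >= min_obs:  # simple quality control
                out[i, j] = good.mean()
    return out

vr = np.random.default_rng(1).normal(5.0, 2.0, size=(360, 500))
print(make_superobs(vr).shape)  # -> (72, 50) superobservations
```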
Abstract
Prior research evaluating snowfall conditions and temporal trends in the United States often acknowledges the role of various synoptic-scale weather systems in governing snowfall variability. While synoptic classifications have been performed in other regions of North America with application to snowfall, there remains a need for enhanced understanding of the atmospheric mechanisms of snowfall in the central United States. Here we conduct a novel synoptic climatological investigation of the weather systems responsible for snowfall in the central United States from 1948 to 2021, focused on their identification and the quantification of associated snowfall totals and events. Ten unique synoptic weather types (SWTs) were identified, each resulting in distinct regions of enhanced snowfall across the study domain that align with regions of sufficiently cold air temperatures and forcing mechanisms. While a substantial proportion of seasonal snowfall is attributed to SWTs associated with surface troughs and/or midlatitude cyclones, in portions of the southeastern and western study domain as much as 70% of seasonal snowfall occurs during systems with high pressure centers as the domain’s synoptic-scale forcing. Easterly flow, potentially resulting in topographic uplift driven by high pressure to the east of the domain, was associated with between 15% and 25% of seasonal snowfall in Nebraska and South Dakota. On average, 64.8% of the SWT occurrences resulted in snowfall within the study region, ranging between 40.1% and 93.5% by SWT. Synoptic climatological investigations provide valuable insights into the unique weather systems that generate hydroclimatic variability.
Significance Statement
By evaluating the weather patterns responsible for snowfall in the central United States, key insights can be gained into how and why snowfall varies, and potentially changes, over space and time. Using an approach that categorizes weather patterns based on their similarities, 10 unique snowfall-producing weather patterns are identified and analyzed from 1948 to 2021. Each pattern resulted in different snowfall amounts across the central United States, varying substantially in space and within the calendar year. Approximately 65% of the time that these weather patterns occur, snowfall is observed in the region. The majority of snowfall-producing weather patterns are associated with low pressure systems, but in some regions up to 70% of snowfall is associated with high pressure systems in which winds can drive topographically induced upward motion.
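The abstract does not state how the ten SWTs were derived, so the sketch below shows one widely used synoptic typing approach purely for illustration: k-means clustering of standardized daily sea level pressure grids. The data, grid, and cluster count are placeholders, and this is not necessarily the paper's classification procedure.

```python
# Hedged sketch of one common synoptic weather typing method: k-means on
# standardized sea level pressure fields. All inputs are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_days, ny, nx = 1000, 20, 30                     # placeholder daily grids
slp = rng.normal(1013.0, 8.0, size=(n_days, ny, nx))

X = slp.reshape(n_days, -1)
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize grid points

swt = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(swt))                           # days per synoptic type
```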
Abstract
This study presents the first numerical simulations of seeded clouds over the Snowy Mountains of Australia. WRF-WxMod, a novel glaciogenic cloud-seeding model, was utilized to simulate the cloud response to winter orographic seeding under various meteorological conditions. Three cases during the 2018 seeding periods were selected for model evaluation, coinciding with an intensive ground-based measurement campaign. The campaign data were used for model validation and evaluation. Comparisons between simulations and observations demonstrate that the model realistically represents cloud structures, liquid water path, and precipitation. Sensitivity tests were performed to pinpoint key uncertainties in simulating natural and seeded clouds and precipitation processes. They also shed light on the complex interplay between various physical parameters/processes and their interaction with large-scale meteorology. Our study found that in unseeded scenarios, the warm and cold biases in different initialization datasets can heavily influence the intensity and phase of natural precipitation. Secondary ice production via Hallett–Mossop processes exerts a secondary influence. On the other hand, the seeding impacts are primarily sensitive to aerosol conditions and the natural ice nucleation process. Both factors alter the supercooled liquid water availability and the precipitation phase, consequently impacting the silver iodide (AgI) nucleation rate. Furthermore, model sensitivities were inconsistent across cases, indicating that no single model configuration optimally represents all three cases. This highlights the necessity of employing an ensemble approach for a more comprehensive and accurate assessment of the seeding impact.
Significance Statement
Winter orographic cloud seeding has been conducted for decades over the Snowy Mountains of Australia to secure water resources. However, this study is the first to perform cloud-seeding simulations for a robust, event-based seeding impact evaluation. A state-of-the-art cloud-seeding model (WRF-WxMod) was used to simulate the cloud seeding and to quantify its impact on the region. Because of low aerosol emissions and highly pristine cloud conditions, the Southern Hemisphere has distinctly different cloud microphysical characteristics from the Northern Hemisphere, where WRF-WxMod has been successfully applied in several regions over the United States. The results showed that WRF-WxMod could accurately capture the clouds and precipitation under both natural and seeded conditions.
Abstract
To compare the roles of two kinds of initial perturbations in a convection-permitting ensemble prediction system (CPEPS) and reveal the effects of the differences in large-scale and small-scale perturbation components on the CPEPS, three initial perturbation schemes are introduced: a dynamical downscaling (DOWN) scheme originating from a coarse-resolution model, a multiscale ensemble transform Kalman filter (ETKF) scheme, and a filtered ETKF (ETKF_LARGE) scheme. First, the comparisons between the DOWN and ETKF schemes reveal that they behave differently in many ways. The ensemble spread and forecast error for precipitation in the DOWN scheme are larger than those in the ETKF. The probabilistic forecasting skill for precipitation in the DOWN scheme is better than that in the ETKF at small neighborhood radii, whereas the advantages of the ETKF begin to appear as the neighborhood radius increases. DOWN also possesses better spread–skill relationships than ETKF and has comparable probabilistic forecasting skill for nonprecipitation variables. Second, the comparisons between DOWN and ETKF_LARGE indicate that the differences in the large-scale initial perturbation components are key to the differences between DOWN and ETKF. Third, the comparisons between ETKF and ETKF_LARGE demonstrate that the small-scale initial perturbations are important, since they increase the precipitation spread at early lead times and decrease the forecast errors while simultaneously improving the probabilistic forecasting skill for precipitation. Given the advantages of the DOWN and ETKF schemes and the importance of both large-scale and small-scale initial perturbations, multiscale initial perturbations should be constructed in future research.
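The ETKF_LARGE scheme implies a scale separation of the initial perturbations. As a hedged illustration, the sketch below splits a two-dimensional perturbation field into large-scale and small-scale components with a spectral low-pass filter; the cutoff wavelength, grid spacing, and field are assumptions, not the paper's configuration.

```python
# Sketch of spectral scale separation for initial perturbations. The cutoff
# and grid are illustrative assumptions.
import numpy as np

def lowpass_perturbation(pert, dx, cutoff_km):
    """Keep only wavelengths longer than cutoff_km in a 2-D field with grid
    spacing dx (km), using an FFT filter."""
    ny, nx = pert.shape
    kx = np.fft.fftfreq(nx, d=dx)                 # cycles per km
    ky = np.fft.fftfreq(ny, d=dx)
    kk = np.hypot(*np.meshgrid(kx, ky))           # radial wavenumber
    mask = kk <= 1.0 / cutoff_km                  # retain large scales
    return np.real(np.fft.ifft2(np.fft.fft2(pert) * mask))

rng = np.random.default_rng(3)
pert = rng.normal(size=(128, 128))                # placeholder perturbation
large = lowpass_perturbation(pert, dx=3.0, cutoff_km=100.0)
small = pert - large                              # the filtered-out scales
print(round(large.std(), 3), round(small.std(), 3))
```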
Abstract
We have developed additive logistic models for the occurrence of lightning, large hail (≥2 cm), and very large hail (≥5 cm) to investigate the evolution of these hazards in the past, in the future, and for forecasting applications. The models, trained with lightning observations, hail reports, and predictors from atmospheric reanalysis, assign an hourly probability to any location and time on a 0.25° × 0.25° × 1-hourly grid as a function of reanalysis-derived predictor parameters, selected following an ingredients-based approach. The resulting hail models outperform the significant hail parameter, and the simulated climatological spatial distributions and annual cycles of lightning and hail are consistent with observations from storm report databases, radar, and lightning detection data. As a corollary result, CAPE released above the −10°C isotherm was found to be a more universally skillful predictor for large hail than CAPE. In the period 1950–2021, the models applied to the ERA5 reanalysis indicate significant increases of lightning and hail across most of Europe, primarily due to rising low-level moisture. The strongest modeled hail increases occur in northern Italy with increasing rapidity after 2010. Here, very large hail has become 3 times more likely than it was in the 1950s. Across North America trends are comparatively small, apart from isolated significant increases in the direct lee of the Rocky Mountains and across the Canadian plains. In the southern plains, a period of enhanced storm activity occurred in the 1980s and 1990s.
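To show the structure of an additive logistic model in miniature: the logit of the event probability is a sum of smooth single-predictor terms, and the probability follows from the logistic function. The predictor set, term shapes, and coefficients below are invented for illustration and are not the fitted model from the study.

```python
# Toy additive logistic model for an hourly hail probability. All terms and
# coefficients are assumptions, not the paper's fitted model.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hail_probability(cape_m10, shear_06km):
    """Additive logistic model: logit(p) = b0 + f1(CAPE above -10C) +
    f2(0-6-km shear). Term shapes are illustrative."""
    b0 = -8.0                                       # intercept (assumption)
    f1 = 0.15 * np.sqrt(np.maximum(cape_m10, 0.0))  # smooth CAPE term
    f2 = 0.12 * shear_06km                          # smooth shear term
    return sigmoid(b0 + f1 + f2)

print(round(hail_probability(cape_m10=900.0, shear_06km=20.0), 3))
# -> ~0.25 for a moderately unstable, sheared environment
```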
Abstract
Surface-layer parameterizations for heat, mass, momentum, and turbulence exchange are a critical component of the land surface models (LSMs) used in weather prediction and climate models. Although formulations derived from Monin–Obukhov similarity theory (MOST) have long been used, bulk Richardson (Ri_b) parameterizations have recently been suggested as a MOST alternative but have been evaluated over a limited number of land-cover and climate types. Examining the parameterizations’ applicability over other regions, particularly drylands that cover approximately 41% of terrestrial land surfaces, is a critical step toward implementing the parameterizations into LSMs. One year (1 January–31 December 2018) of eddy covariance measurements from a 10-m tower in southeastern Arizona and a 200-m tower in western Texas were used to determine how well the Ri_b parameterizations for friction velocity (
Significance Statement
Weather forecasting models rely upon complex mathematical relationships to predict temperature, wind, and moisture. Monin–Obukhov similarity theory (MOST) has long been used to forecast these quantities near the land surface, even though MOST’s limitations are well known in the scientific community. Researchers have suggested an alternative to MOST called the bulk Richardson (Ri_b) approach. To allow for the Ri_b approach to be used in weather forecasting models, the approach needs to be tested over different land-cover and climate types. In this study, we applied the Ri_b approach to dry areas of the United States and found that the approach better represented turbulence variables than MOST relationships. These findings are an important step toward using Ri_b relationships in weather forecasting models.
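For concreteness, the bulk Richardson number at the heart of these parameterizations compares buoyancy to shear between the surface and a measurement height. A minimal sketch follows, with illustrative tower-style inputs rather than data from the study.

```python
# Bulk Richardson number between the surface and height z, assuming winds
# vanish at the surface. Input values are illustrative.
G = 9.81  # gravitational acceleration, m s-2

def bulk_richardson(theta_v_sfc, theta_v_z, u_z, v_z, z):
    """Ri_b = (g / theta_v) * (theta_v(z) - theta_v(sfc)) * z / (u^2 + v^2)."""
    dtheta = theta_v_z - theta_v_sfc
    return (G / theta_v_z) * dtheta * z / (u_z**2 + v_z**2)

# Stable example: virtual potential temperature increases with height.
print(round(bulk_richardson(300.0, 301.5, u_z=2.0, v_z=1.0, z=10.0), 3))
# -> ~0.098 (positive: statically stable surface layer)
```

Positive Ri_b indicates stable stratification and negative Ri_b unstable; Ri_b-based schemes diagnose quantities such as friction velocity directly from this number instead of iterating on the Obukhov length.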
Abstract
Rapid increases in the flash rate (FR) of a thunderstorm, so-called lightning jumps (LJs), have potential for nowcasting applications and for increasing the lead times of severe weather warnings. To date, several automated LJ algorithms have been developed and tuned for ground-based lightning locating systems. This study addresses the optimization of an automated LJ algorithm for Geostationary Lightning Mapper (GLM) lightning observations from space. The widely used σ-LJ algorithm is applied in its original form and in an adapted calculation that includes the footprint area of the storm cell (the FRarea LJ algorithm). In addition, a new relative increase level (RIL) LJ algorithm is introduced. All algorithms are tested in different configurations, and detected LJs are verified against National Centers for Environmental Information severe weather reports. Overall, the FRarea algorithm with an activation FR threshold of 15 flashes per minute and a σ-level threshold of 1.0–1.5, as well as the RIL algorithm with an FR threshold of 15 flashes per minute and an RIL threshold of 1.1, are recommended. These algorithms scored the best critical success index (CSI) of ∼0.5, with a probability of detection of 0.6–0.7 and a false alarm ratio of ∼0.4. For daytime warm-season thunderstorms, the CSI can exceed 0.5, reaching 0.67 for storms observed during three consecutive days in April 2021. The CSI is generally lower at night and in winter.
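As a sketch of how a σ-level LJ test operates: the rate of change of the flash rate is compared against a multiple of the standard deviation of its recent history, subject to an activation flash rate threshold. The σ-level and activation thresholds below follow the values recommended in the abstract; the windowing details and the flash rate series are assumptions for illustration.

```python
# Sketch of a sigma-level lightning jump (LJ) test. The sigma-level and
# activation thresholds follow the abstract; window length and data are
# illustrative assumptions.
import numpy as np

def lightning_jumps(fr, sigma_level=1.5, fr_threshold=15.0, history=5):
    """fr: flash rates per minute. Return minute indices flagged as LJs."""
    dfrdt = np.diff(fr)                   # rate of change of the flash rate
    jumps = []
    for t in range(history, len(dfrdt)):
        sigma = dfrdt[t - history:t].std()
        active = fr[t + 1] >= fr_threshold        # activation FR threshold
        if active and sigma > 0 and dfrdt[t] >= sigma_level * sigma:
            jumps.append(t + 1)
    return jumps

fr = np.array([2, 3, 4, 5, 6, 7, 9, 25, 30], dtype=float)
print(lightning_jumps(fr))  # -> [7]: a sharp jump once the storm is active
```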