Browse: Journal of Applied Meteorology and Climatology

Items 71–80 of 9,988
Andrew J. Heymsfield, Micael A. Cecchini, Andrew Detwiler, Ryan Honeyager, and Paul Field

Abstract

Measurements from the South Dakota School of Mines and Technology T-28 hail-penetrating aircraft are analyzed using recently developed data processing techniques with the goals of identifying where large hail is found relative to vertical motion and improving the detection of hail microphysical properties from radar. Hail particle size distributions (PSD) and environmental conditions (temperature, relative humidity, liquid water content, air vertical velocity) were digitally collected by the T-28 between 1995 and 2003 and synthesized by Detwiler et al. The PSD were forward modeled by Cecchini et al. to simulate their radar reflectivity at multiple radar wavelengths. The T-28 penetrated temperatures primarily between 0° and −10°C. The largest hailstones were sampled near the updraft/downdraft interface. Liquid water contents were highest in the updraft cores, whereas total (liquid + frozen) water contents were highest near the updraft/downdraft interface. The fitted properties of the PSD (intercept and slope) are directly related to each other but show no dependence on the region of the hailstorm where they were sampled. The PSD measurements and the radar reflectivity calculations at multiple wavelengths facilitated the development of relationships between the PSD bulk properties (hail kinetic energy and kinetic energy flux) and the radar reflectivity. Rather than the oft-assumed spherical, solid-ice physical properties, actual measurements of hail properties are used in the analysis. Results from the maximum estimated size of hail (MESH) and vertically integrated liquid (VIL) algorithms are evaluated based on this analysis.
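The bulk quantity named above, hail kinetic energy flux, can be illustrated with a short sketch. This is our illustration only, not the study's method: it assumes spherical, solid-ice hailstones with a simple drag-balance fall speed and a hypothetical exponential PSD, exactly the simplifications the study replaces with measured hail properties.

```python
import numpy as np

# Illustrative only: bulk kinetic-energy flux of a binned hail PSD under
# the classic sphere/solid-ice assumptions. All constants and the PSD
# parameters below are hypothetical values for demonstration.

RHO_ICE = 917.0   # kg m^-3, solid ice (assumption)
RHO_AIR = 1.0     # kg m^-3, assumed in-storm air density
CD = 0.6          # assumed constant drag coefficient for hail
G = 9.81          # m s^-2

def kinetic_energy_flux(diam_m, conc_m4):
    """KE flux (W m^-2) from bin midpoints D (m) and N(D) (m^-3 m^-1)."""
    mass = (np.pi / 6.0) * RHO_ICE * diam_m**3                 # kg per stone
    # Drag-balance terminal velocity for a sphere (assumption)
    v_t = np.sqrt(4.0 * RHO_ICE * G * diam_m / (3.0 * CD * RHO_AIR))
    widths = np.gradient(diam_m)                               # bin widths (m)
    # Each stone carries 0.5*m*v^2 of energy and crosses unit area at rate v
    return np.sum(0.5 * mass * v_t**2 * v_t * conc_m4 * widths)

# Exponential PSD N(D) = N0 * exp(-lam * D) with hypothetical fit parameters
diam = np.linspace(0.005, 0.05, 50)      # 5-50-mm hail
n0, lam = 8000.0, 200.0                  # intercept (m^-4), slope (m^-1)
flux = kinetic_energy_flux(diam, n0 * np.exp(-lam * diam))
```

Because the flux is linear in N(D), relationships of this kind against reflectivity (which is also an integral over the PSD) can be fitted once the PSD and the mass and fall-speed laws are specified, which is where measured hail properties enter.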

Significance Statement

Hailstorms in the United States have caused over $10 billion in damage in each of the last 14 years, according to insurance industry estimates (Heymsfield and Giammanco 2020). Algorithms have been developed to identify the presence and size of hail from radar. Numerical simulations of hailstorms have improved significantly since the 1970s, and further improvements will provide better resolution and more accurate estimates of the sizes of hailstones falling to the ground. Measurements of the properties of hailstones, their mass and terminal velocities, have improved in recent years but in general are not incorporated in the algorithms developed for radar estimates of hail sizes or in the hail properties used in model simulations. This study synthesizes in situ aircraft data and computed radar backscatter cross sections, together with recent estimates of the physical characteristics of hailstones, to improve the representation of hail in numerical models and the quantitative assessment of hail properties in storms using weather radar.

Restricted access
Stephen Jewson

Abstract

We use a simple risk model for U.S. hurricane wind and surge economic damage to investigate the impact of projected changes in the frequencies of hurricanes of different intensities due to climate change. For average annual damage, we find that changes in the frequency of category-4 storms dominate. For distributions of annual damage, we find that changes in the frequency of category-4 storms again dominate for all except the shortest return periods. Sensitivity tests show that accounting for landfall, uncertainties, and correlations leads to increases in damage estimates. When we propagate the distributions of uncertain frequency changes to give a best estimate of the changes in damage, the changes are moderate. When we pick individual scenarios from within the distributions of frequency changes, we find a significant probability of much larger changes in damage. The inputs on which our study depends are highly uncertain, and our methods are approximate, leading to high levels of uncertainty in our results. Also, the damage changes we consider are only part of the total possible change in hurricane damage due to climate change. Total damage change estimates would also need to include changes due to other factors, including possible changes in genesis, tracks, size, forward speed, sea level, rainfall, and exposure. Nevertheless, we believe that our results give important new insights into U.S. hurricane risk under climate change.

Significance Statement

We investigate how changes in the frequencies of hurricanes of different intensities as a result of climate change may contribute to changes in U.S. economic damage due to wind and surge. We find that economic damage will likely increase as a result of projected increases in the frequency of landfalling hurricanes. Analysis of our results shows that increases in the frequency of category-4 storms are the main driver of the changes. Our best estimate results, based on a multimodel ensemble, give modest increases in damage, but within the ensemble there are individual scenarios that give much larger increases in damage. The large range of individual damage estimates is a motivation for continuing efforts to reduce the uncertainty around hurricane projections under climate change.

Restricted access
Takuto Sato and Hiroyuki Kusaka

Abstract

This study focuses on the application of two standard inflow turbulence generation methods, used in computational fluid dynamics, to simulations of growing convective boundary layers (CBLs): the recycle–rescale (R-R) method and the digital filter–based (DF) method. The primary objective of this study is to expand the applicability of the R-R method to simulations of thermally driven CBLs; the resulting method is called the extended R-R method. In previous studies, the DF method had already been extended to generate potential temperature perturbations, so this study also investigated whether the extended DF method can be applied to simulations of growing thermally driven CBLs. Idealized simulations of growing thermally driven CBLs using the extended R-R and DF methods were performed. The results showed that both extended methods could capture the characteristics of thermally driven CBLs. The extended R-R method reproduced turbulence in thermally driven CBLs better than the extended DF method in terms of the spectrum and histogram of vertical wind speed. However, the height of the thermally driven CBL was underestimated by about 100 m compared with the extended DF method. Sensitivity experiments were conducted on the parameters used in the extended DF and R-R methods. The results showed that underestimation of the length scale in the extended DF method causes a shortage of large-scale turbulence components. The sensitivity experiments also suggest that the driver region in the extended R-R method should be long enough to reproduce the spanwise movement of the roll vortices.

Significance Statement

Inflow turbulence generation methods for large-eddy simulation (LES) models are crucial for the better downscaling of meteorological mesoscale models (RANS models) to microscale models (LES models). Various CFD methods have been developed, but few have been applied to simulations of thermally driven convective boundary layers (CBLs). To address this problem, we focused on a method that recycles turbulence [the recycle–rescale (R-R) method] and another method that synthetically generates turbulence [the digital filter–based (DF) method]. This study extends the R-R method to manage turbulence in thermally driven CBLs. In addition, this study investigated the applicability of the DF method to thermally driven CBL simulations. Both extended methods are effective for downscaling experiments and capture the characteristics of thermally driven CBLs.

Restricted access
Linye Song, Lu Yang, Conglan Cheng, Aru Hasi, and Mingxuan Chen

Abstract

This study investigates the impacts of grid spacing and station network on surface analyses and forecasts of temperature, humidity, and winds in the complex terrain of the Beijing Winter Olympics region. The high-resolution analyses are generated by a rapid-refresh integrated system that includes a topographic downscaling procedure. Results show that surface analyses are more accurate at finer target grid spacing. In particular, the average analysis errors of surface temperature, humidity, and winds are all significantly reduced as the grid spacing is refined. This improvement is mainly attributed to a more realistic simulation of topographic effects in the integrated system, because topographic downscaling at finer grid spacing can add more detail in a complex mountain region. Refining the grid from 1 km to 100 m also largely improves 1–12-h forecasts of temperature and humidity, while wind shows only a slight improvement for 1–6-h forecasts. The influence of the station network on the surface analyses is further examined. Results show that the spatial distributions of temperature and humidity at 100-m grid spacing are more realistic and accurate when an intensive automatic weather station network is added, as more observational information can be incorporated. Adding the station network also reduces forecast errors, an effect that lasts for about 6 h. However, although surface winds display better analysis skill when more stations are added, the wind in the mountaintop region is sometimes marginally degraded in both analysis and forecast. The results are helpful for improving analysis and forecast products in complex terrain and have implications for downscaling from a coarse grid to a finer one.

Restricted access
Trent W. Ford, Jason A. Otkin, Steven M. Quiring, Joel Lisonbee, Molly Woloszyn, Junming Wang, and Yafang Zhong

Abstract

Increased flash drought awareness in recent years has motivated the development of numerous indicators for monitoring, early warning, and assessment. These indicators can act as a complementary set of tools to inform flash drought response and management. However, the limitations of each indicator must be measured and communicated between researchers and practitioners to ensure effectiveness. The limitations of any flash drought indicator are better understood and overcome through assessment of indicator sensitivity and consistency; such assessment, however, cannot assume any single indicator properly represents the flash drought “truth.” To better understand the current state of flash drought monitoring, this study presents an intercomparison of nine widely used flash drought indicators. The indicators represent perspectives and processes known to drive flash drought, including evapotranspiration and evaporative demand, precipitation, and soil moisture. We find no single flash drought indicator consistently outperforms all others across the contiguous United States. We do find that the evaporative demand- and evapotranspiration-driven indicators tend to lead precipitation- and soil moisture-based indicators in flash drought onset, but they also tend to produce more flash drought events collectively. Overall, the regional and definition-specific variability in results supports the argument for a multi-indicator approach to flash drought monitoring, as advocated by recent studies. Furthermore, flash drought research, especially evaluation of historical and potential future changes in flash drought characteristics, should test multiple indicators, datasets, and methods for representing flash drought, and ideally employ a multi-indicator analysis framework rather than inferring all flash drought information from a single indicator.

Significance Statement

Rapid onset or “flash” drought has been an increasing concern globally, with quickly intensifying impacts on agriculture, ecosystems, and water resources. Many tools and indicators have been developed to monitor and provide early warning for flash drought, ideally resulting in more time for effective mitigation and reduced impacts. However, there remains no widely accepted single method for defining, monitoring, and measuring flash drought, which means most indicators that are developed are compared with other individual indicators or with conditions and impacts in one or two flash drought events. In this study, we assess the state of flash drought monitoring through an intercomparison of nine widely used flash drought indicators that represent different aspects of flash drought. We find that no single flash drought indicator outperformed all others and suggest that a comprehensive flash drought monitor should leverage multiple, complementary indicators, datasets, and methods. Furthermore, we suggest that flash drought research, especially work reflecting on historical or projected changes in flash drought characteristics, should use multiple indicators, datasets, and methods for analyses, thereby reducing the potentially confounding effects of sensitivity to a single indicator.

Open access
Martin Ridal, Jana Sanchez-Arriola, and Mats Dahlbom

Abstract

The use of radial velocity information from the European weather radar network is challenging because of the heterogeneity of the network and the different ways in which Doppler velocity information is provided. Preprocessing is therefore needed to harmonize the data. Radar observations form a very high-resolution dataset, which is both demanding to process and of much finer resolution than the model. One way of reducing the data volume is to create “super observations” (SO) by averaging observations over a predefined area. This paper describes the preprocessing necessary to use radar radial velocities in the data assimilation, including the SO construction. Our main focus is to optimize the use of radial velocities in the HARMONIE–AROME numerical weather prediction model. Several experiments were run to find the best settings for first-guess check limits as well as a tuning of the observation error value. The optimal SO size and the corresponding thinning distance for radar radial velocities were also studied. It was found that the radial velocity information and the reflectivity from weather radars can be treated differently when it comes to the SO size and the thinning. A positive impact was found when adding the velocities together with the reflectivity using the same SO size and thinning distance, but the best results were found when the SO size and thinning distance for the radial velocities are smaller than the corresponding values for reflectivity.
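The super-observation construction described above, averaging raw gates over a predefined area to thin the data, can be sketched briefly. This is an illustrative sketch under our own assumptions, not code from the HARMONIE–AROME preprocessing; the gate spacing, box length, and data values below are hypothetical.

```python
import numpy as np

# Minimal sketch of "super observation" (SO) construction along one radar
# ray: raw radial-velocity gates are averaged over along-range boxes of a
# chosen length, reducing data volume and representativeness error.
# All parameter values are hypothetical.

def super_obs(ranges_m, radial_vel_ms, so_length_m=5000.0):
    """Average gates into SOs of length so_length_m along the ray.

    Returns (so_range_centres, so_velocities). Gates flagged NaN
    (e.g. clutter-filtered) are ignored within each box.
    """
    edges = np.arange(ranges_m.min(), ranges_m.max() + so_length_m,
                      so_length_m)
    idx = np.digitize(ranges_m, edges) - 1        # box index per gate
    centres, values = [], []
    for k in range(len(edges) - 1):
        v = radial_vel_ms[(idx == k) & ~np.isnan(radial_vel_ms)]
        if v.size:                                 # keep boxes with valid gates
            centres.append(0.5 * (edges[k] + edges[k + 1]))
            values.append(v.mean())
    return np.array(centres), np.array(values)

# Hypothetical ray: 250-m gates out to 50 km, one clutter-flagged gate
r = np.arange(0.0, 50000.0, 250.0)
v = 10.0 + 0.0001 * r          # synthetic velocity increasing with range
v[10] = np.nan                 # a rejected gate
so_r, so_v = super_obs(r, v)   # 5-km super observations
```

In practice the SO box would be two-dimensional (along range and azimuth) and the averaging weighted, but the thinning trade-off the abstract tunes, SO size versus retained detail, is the same.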

Open access
Zachary J. Suriano, Charles Loewy, and Jamie Uz

Abstract

Prior research evaluating snowfall conditions and temporal trends in the United States often acknowledges the role of various synoptic-scale weather systems in governing snowfall variability. While synoptic classifications have been applied to snowfall in other regions of North America, there remains a need for enhanced understanding of the atmospheric mechanisms of snowfall in the central United States. Here we conduct a novel synoptic climatological investigation of the weather systems responsible for snowfall in the central United States from 1948 to 2021, focused on their identification and the quantification of associated snowfall totals and events. Ten unique synoptic weather types (SWTs) were identified, each resulting in distinct regions of enhanced snowfall across the study domain that align with regions of sufficiently cold air temperatures and forcing mechanisms. While a substantial proportion of seasonal snowfall is attributed to SWTs associated with surface troughs and/or midlatitude cyclones, in portions of the southeastern and western study domain as much as 70% of seasonal snowfall occurs during systems with high pressure centers as the domain’s synoptic-scale forcing. Easterly flow associated with high pressure to the east of the domain, potentially resulting in topographic uplift, was associated with between 15% and 25% of seasonal snowfall in Nebraska and South Dakota. On average, 64.8% of the SWT occurrences resulted in snowfall within the study region, ranging between 40.1% and 93.5% by SWT. Synoptic climatological investigations provide valuable insights into the unique weather systems that generate hydroclimatic variability.

Significance Statement

By evaluating the weather patterns that are responsible for snowfall in the central United States, key insights can be gained into how and why snowfall varies and potentially changes over space and time. Using an approach that categorizes weather patterns based on their similarities, here 10 unique snowfall-producing weather patterns are identified and analyzed from 1948 to 2021. Each pattern resulted in different snowfall amounts across the central United States, varying substantially spatially and within the calendar year. Approximately 65% of the time that these weather patterns occur, snowfall is observed in the region. The majority of snowfall-producing weather patterns are associated with low pressure systems, but in some regions up to 70% of snowfall is associated with instances of high pressure in which winds can cause upward motions associated with topography.

Open access
Sisi Chen, Lulin Xue, Sarah Tessendorf, Thomas Chubb, Andrew Peace, Luis Ackermann, Artur Gevorgyan, Yi Huang, Steven Siems, Roy Rasmussen, Suzanne Kenyon, and Johanna Speirs

Abstract

This study presents the first numerical simulations of seeded clouds over the Snowy Mountains of Australia. WRF-WxMod, a novel glaciogenic cloud-seeding model, was utilized to simulate the cloud response to winter orographic seeding under various meteorological conditions. Three cases during the 2018 seeding periods were selected for model evaluation, coinciding with an intensive ground-based measurement campaign. The campaign data were used for model validation and evaluation. Comparisons between simulations and observations demonstrate that the model realistically represents cloud structures, liquid water path, and precipitation. Sensitivity tests were performed to pinpoint key uncertainties in simulating natural and seeded clouds and precipitation processes. They also shed light on the complex interplay between various physical parameters/processes and their interaction with large-scale meteorology. Our study found that in unseeded scenarios, the warm and cold biases in different initialization datasets can heavily influence the intensity and phase of natural precipitation. Secondary ice production via Hallett–Mossop processes exerts a secondary influence. On the other hand, the seeding impacts are primarily sensitive to aerosol conditions and the natural ice nucleation process. Both factors alter the supercooled liquid water availability and the precipitation phase, consequently impacting the silver iodide (AgI) nucleation rate. Furthermore, model sensitivities were inconsistent across cases, indicating that no single model configuration optimally represents all three cases. This highlights the necessity of employing an ensemble approach for a more comprehensive and accurate assessment of the seeding impact.

Significance Statement

Winter orographic cloud seeding has been conducted for decades over the Snowy Mountains of Australia to secure water resources. However, this study is the first to perform cloud-seeding simulation for a robust, event-based seeding impact evaluation. A state-of-the-art cloud-seeding model (WRF-WxMod) was used to simulate cloud seeding and quantify its impact on the region. The Southern Hemisphere, with its low aerosol emissions and highly pristine cloud conditions, has distinctly different cloud microphysical characteristics than the Northern Hemisphere, where WRF-WxMod has been successfully applied in a few regions over the United States. The results showed that WRF-WxMod could accurately capture the clouds and precipitation in both natural and seeded conditions.

Restricted access
Jingzhuo Wang, Jing Chen, Hanbin Zhang, Ruoyun Ma, and Fajing Chen

Abstract

To compare the roles of two kinds of initial perturbations in a convection-permitting ensemble prediction system (CPEPS) and reveal the effects of differences in the large-scale/small-scale perturbation components, three initial perturbation schemes are introduced: a dynamical downscaling (DOWN) scheme originating from a coarse-resolution model, a multiscale ensemble transform Kalman filter (ETKF) scheme, and a filtered ETKF (ETKF_LARGE) scheme. First, comparisons between the DOWN and ETKF schemes reveal that they behave differently in many ways. Specifically, the ensemble spread and forecast error for precipitation in the DOWN scheme are larger than those in the ETKF; the probabilistic forecasting skill for precipitation in the DOWN scheme is better than that in the ETKF at small neighborhood radii, whereas the advantages of the ETKF begin to appear as the neighborhood radius increases; and DOWN possesses better spread–skill relationships than ETKF with comparable probabilistic forecasting skill for nonprecipitation variables. Second, comparisons between DOWN and ETKF_LARGE indicate that differences in the large-scale initial perturbation components are key to the differences between DOWN and ETKF. Third, comparisons between ETKF and ETKF_LARGE demonstrate that the small-scale initial perturbations are important, since they increase the precipitation spread at early lead times and decrease the forecast errors while simultaneously improving the probabilistic forecasting skill for precipitation. Given the advantages of the DOWN and ETKF schemes and the importance of both large-scale and small-scale initial perturbations, multiscale initial perturbations should be constructed in future research.

Restricted access