Journal of Applied Meteorology and Climatology

Zachary J. Suriano, Gina R. Henderson, Julia Arthur, Kricket Harper, and Daniel J. Leathers

Abstract

Extreme snow ablation can greatly impact regional hydrology, affecting streamflow, soil moisture, and groundwater supplies. Relatively little is known about the climatology of extreme ablation events in the eastern United States or about the causal atmospheric forcing mechanisms behind such events. Studying the Susquehanna River basin over a 50-yr period, here we evaluate the variability of extreme ablation and river discharge events in conjunction with a synoptic classification and a global-scale teleconnection pattern analysis. Results indicate that an average of 4.2 extreme ablation events occurred within the basin per year, and some 88% of those events resulted in an increase in river discharge when evaluated at a 3-day lag. Both extreme ablation and extreme discharge events occurred most frequently during instances of southerly synoptic-scale flow, accounting for 35.7% and 35.8% of events, respectively. However, extreme ablation was also regularly observed during high-pressure-overhead and rain-on-snow synoptic weather types. The largest magnitude of snow ablation per extreme event occurred during rain-on-snow occasions, when a basinwide, areal-weighted 5.7 cm of snow depth was lost, approximately 23% more than during the average extreme event. Interannually, southerly-flow synoptic weather types were more frequent during winter seasons when the Arctic and North Atlantic Oscillations were positively phased. Approximately 30% of the variance in rain-on-snow weather-type frequency was explained by the Pacific–North American pattern. Evaluating the pathway of physical forcing mechanisms from regional events up through global patterns allows for improved understanding of the processes resulting in extreme ablation and discharge across the Susquehanna basin.
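
As a rough illustration of the variance-explained statistic quoted above, the sketch below regresses a winter weather-type frequency on a teleconnection index; the series, sample size, and coefficients are hypothetical placeholders, not values or code from the study.

```python
# Hypothetical sketch: how much interannual variance in a synoptic weather-type
# frequency is explained by a teleconnection index (made-up PNA series).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pna = rng.normal(size=50)                                   # seasonal-mean PNA index, one value per winter
ros_days = 5 + 2.0 * pna + rng.normal(scale=3.0, size=50)   # rain-on-snow weather-type days per winter

fit = stats.linregress(pna, ros_days)
print(f"r = {fit.rvalue:+.2f}, variance explained = {fit.rvalue**2:.0%}")
```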

Significance Statement

The purpose of this study is to better understand how certain weather patterns are related to extreme snowmelt and streamflow events and what causes those weather patterns to vary with time. This is valuable information for informing hazard preparation and resource management within the basin. We found that weather patterns with southerly winds were the most frequent patterns responsible for extreme melt and streamflow, and those patterns occurred more often when the Arctic and North Atlantic Oscillations were in their “positive” configuration. Future work should consider the potential for these patterns, and related impacts, to change over time.

Open access
Riku Shimizu, Shoichi Shige, Toshio Iguchi, Cheng-Ku Yu, and Lin-Wen Cheng

Abstract

The Dual-Frequency Precipitation Radar (DPR), which consists of a Ku-band precipitation radar (KuPR) and a Ka-band precipitation radar (KaPR) on board the GPM Core Observatory, cannot observe precipitation at low altitudes near the ground that are contaminated by surface clutter. This near-surface region is called the blind zone. The DPR estimates the clutter-free bottom (CFB), which is the lowest altitude not included in the blind zone, and estimates precipitation only at altitudes higher than the CFB. High CFBs, which are common over mountainous areas, are an obstacle to detecting shallow precipitation and estimating low-level enhanced precipitation. We compared KuPR data with rain gauge data from Da-Tun Mountain of northern Taiwan acquired from March 2014 to February 2020. A total of 12 cases were identified in which the KuPR missed rainfall with intensity of >10 mm h−1 that was observed by rain gauges. Comparison of the KuPR profiles with ground-based radar profiles revealed that shallow precipitation in the KuPR blind zone was missed because the CFB was estimated to be higher than the lower bound of the range free from surface echoes. In the original operational algorithm, the CFB was estimated using only the received power data of the KuPR. In this study, the CFB was identified from the sharp increase in the difference between the received powers of the KuPR and the KaPR at altitudes affected by surface clutter. By lowering the CFB, the KuPR succeeded in detecting and estimating shallow precipitation.
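
The dual-frequency idea in the last two sentences can be sketched as follows. This is not the DPR operational algorithm: the toy profiles, the 5-dB jump threshold, and the helper estimate_cfb are all invented for illustration.

```python
# Toy sketch of a clutter-free-bottom estimate from the Ku - Ka received-power difference.
import numpy as np

def estimate_cfb(pr_ku, pr_ka, heights, jump_db=5.0):
    """Return the lowest height above the highest gate whose downward step shows a
    sharp increase in the Ku - Ka received-power difference (a crude clutter proxy)."""
    diff = np.asarray(pr_ku, float) - np.asarray(pr_ka, float)
    step_down = diff[:-1] - diff[1:]              # increase of the difference toward the surface
    clutter = np.where(step_down > jump_db)[0]    # gates flagged as clutter affected
    if clutter.size == 0:
        return heights[0]                         # no clutter signature: use the lowest gate
    return heights[clutter.max() + 1]             # first gate above the clutter-affected layer

# Toy profiles (heights ascending): surface clutter boosts the two lowest Ku gates.
heights = np.arange(0, 4000, 250)                 # m
ku = np.full(heights.size, -80.0); ku[:2] += 30.0
ka = np.full(heights.size, -85.0); ka[:2] += 10.0
print("estimated clutter-free bottom:", estimate_cfb(ku, ka, heights), "m")
```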

Significance Statement

The Dual-Frequency Precipitation Radar (DPR) on board the GPM Core Observatory cannot capture precipitation in the low-altitude region near the ground that is contaminated by surface clutter. This region is called the blind zone. The DPR estimates the clutter-free bottom (CFB), which is the lower bound of the range free from surface echoes, and uses data from altitudes higher than the CFB. The DPR consists of a Ku-band precipitation radar (KuPR) and a Ka-band precipitation radar (KaPR). The KuPR missed some shallow precipitation of more than 10 mm h−1 in the blind zone over Da-Tun Mountain of northern Taiwan because of erroneous CFB estimation. Using both the KuPR and the KaPR, we improved the CFB estimation algorithm, which lowered the CFB, narrowed the blind zone, and improved the capability to detect shallow precipitation.

Open access
Athanasios Ntoumos, Panos Hadjinicolaou, George Zittis, Katiana Constantinidou, Anna Tzyrkalli, and Jos Lelieveld

Abstract

We assess the sensitivity of the Weather Research and Forecasting (WRF) Model to the use of different planetary boundary layer (PBL) parameterizations, focusing on air temperature and extreme heat conditions. This work aims to evaluate the performance of the WRF Model in simulating temperatures across the Middle East–North Africa (MENA) domain, explain the model biases resulting from the choice of different PBL schemes, and identify the best-performing configuration for the MENA region. Three different PBL schemes are used to downscale the ECMWF ERA-Interim climate over the MENA region at a horizontal resolution of 24 km for the period 2000–10: the Mellor–Yamada–Janjić (MYJ), Yonsei University (YSU), and Asymmetric Convective Model, version 2 (ACM2), schemes. For the evaluation of the WRF runs, we used related meteorological variables from the ERA5 reanalysis, including summer maximum and minimum 2-m air temperature and heat extreme indices. Our results indicate that the simulations tend to overestimate maximum temperatures and underestimate minimum temperatures, and we find that model errors are strongly dependent on geographic location. The possible physical causes of model biases are investigated through the analysis of additional variables (such as boundary layer height, moisture, and heat fluxes). It is shown that differences among the PBL schemes are associated with differences in vertical mixing strength, which alters the magnitude of the entrainment of free-tropospheric air into the PBL. The YSU scheme is found to perform best and is recommended for WRF climate simulations of the MENA region.
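
A minimal sketch of the kind of evaluation described above, computing mean bias and RMSE of simulated maximum temperature against an ERA5-like reference. The fields, grid, and per-scheme offsets are synthetic stand-ins, not output from the study's WRF runs.

```python
# Hypothetical sketch: bias statistics of several model runs against a reference field.
import numpy as np

rng = np.random.default_rng(1)
era5_tmax = 35 + rng.normal(scale=2.0, size=(100, 100))   # reference seasonal-mean field [deg C]
runs = {                                                  # invented offsets for the three PBL schemes
    "MYJ": era5_tmax + 1.5 + rng.normal(scale=0.5, size=(100, 100)),
    "YSU": era5_tmax + 0.4 + rng.normal(scale=0.5, size=(100, 100)),
    "ACM2": era5_tmax + 0.9 + rng.normal(scale=0.5, size=(100, 100)),
}
for name, field in runs.items():
    bias = field - era5_tmax
    print(f"{name}: mean bias {bias.mean():+.2f} K, RMSE {np.sqrt((bias**2).mean()):.2f} K")
```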

Open access
Lea Hartl, Carl Schmitt, Telayna Wong, Dragos A. Vas, Lewis Enterkin, and Martin Stuefer

Abstract

Ice fog typically occurs at temperatures below approximately −30°C. Ice fog formation and persistence are affected by atmospheric processes at different spatial and temporal scales and can be influenced by anthropogenic activities that add vapor to the near-surface atmosphere. Based on meteorological observations from Fairbanks International Airport and Eielson Air Force Base (Alaska) from 1948/49 to 2021/22, we provide an overview of general ice fog climatology at the sites, changes over time, and synoptic-scale upper-level weather patterns common during ice fog occurrence. On average, ice fog occurrence has decreased by 60%–70% over the study period (median number of ice fog days at Fairbanks airport in the period 1950/51–1979/80: 16.5; median in the period 1990/91–2019/20: 6). The average lengths of ice fog events and of the ice fog season have also decreased. Trends are not linear, and rates of change vary over time. The greatest reduction in ice fog occurred during the 1970s and 1980s. Trends in ice fog hours roughly track decreasing trends in hours with cold temperatures. However, the percentage of cold hours in which ice fog occurs has decreased since approximately the 1980s. This result suggests that local changes in air pollution or near-surface moisture may also play an important role in trends in ice fog occurrence. We use self-organizing maps to assess recurring synoptic-scale weather patterns in the upper atmosphere during ice fog conditions in Fairbanks. Ice fog is typically associated with northerly flow or weak pressure gradients over the study area.
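
The "percentage of cold hours in which ice fog occurs" diagnostic could be computed along the lines of the sketch below; the hourly flags, thresholds, and winter labeling are fabricated placeholders rather than the station records analyzed above.

```python
# Hypothetical sketch: per-winter fraction of cold hours that also report ice fog.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
hours = pd.date_range("1950-10-01", "2020-04-30 23:00", freq="h")
is_cold = rng.random(hours.size) < 0.15               # stand-in for "temperature below about -30 C"
is_fog = is_cold & (rng.random(hours.size) < 0.2)     # ice fog only during cold hours in this toy

df = pd.DataFrame({"cold": is_cold, "fog": is_fog}, index=hours)
winter = np.where(df.index.month >= 7, df.index.year, df.index.year - 1)  # label winters by start year
frac = df.groupby(winter).apply(lambda g: g["fog"].sum() / max(g["cold"].sum(), 1))
print(frac.head())
```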

Significance Statement

We aim to show when and how often ice fog occurs in the Fairbanks region, how this has changed over time, and what kind of larger-scale weather patterns are common during ice fog. Ice fog strongly reduces visibility and represents a hazard to aviation and other traffic. The number of ice fog hours and days per winter has decreased substantially over the 70-yr period of record. Ice fog is, on average, less persistent now than in the past. The reduction is related to fewer days with cold temperatures, but changes in air pollution and other local factors may also play an important role. Further study is needed to fully attribute the causes of the observed changes.

Open access
Julia F. Lockwood, Nick Dunstone, Leon Hermanson, Geoffrey R. Saville, Adam A. Scaife, Doug Smith, and Hazel E. Thornton

Abstract

North Atlantic Ocean hurricane activity exhibits significant variation on multiannual time scales. Advance knowledge of periods of high activity would be beneficial to the insurance industry as well as society in general. Previous studies have shown that climate models initialized with current oceanic and atmospheric conditions, known as decadal prediction systems, are skillful at predicting North Atlantic hurricane activity averaged over periods of 2–10 years. We show that this skill also translates into skillful predictions of real-world U.S. hurricane damage. Using such systems, we have developed a prototype climate service for the insurance industry giving probabilistic forecasts of 5-yr-mean North Atlantic hurricane activity, measured by the total accumulated cyclone energy (ACE index), and 5-yr-total U.S. hurricane damage (given in U.S. dollars). Rather than tracking hurricanes in the decadal systems directly, the forecasts use a relative temperature index known to be strongly linked to hurricane activity. Statistical relationships based on past forecasts of the index and observed hurricane activity and U.S. damage are then used to produce probabilistic forecasts. The predictions of hurricane activity and U.S. damage for the period 2020–24 are high, with ∼95% probabilities of being above average. We note that skill in predicting the temperature index on which the forecasts are based has declined in recent years. More research is therefore needed to understand under which conditions the forecasts are most skillful.
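
A hedged sketch of the general statistical step described above: regress observed multiyear activity on a model-predicted index, then convert a new index forecast into a probability of above-average activity using the residual spread. The numbers, sample size, and Gaussian residual assumption are illustrative, not the operational system.

```python
# Hypothetical sketch: probabilistic activity forecast from an index regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
index_hindcast = rng.normal(size=30)                          # past forecasts of the relative SST index
ace_obs = 100 + 25 * index_hindcast + rng.normal(0, 15, 30)   # observed 5-yr-mean ACE for those periods

fit = stats.linregress(index_hindcast, ace_obs)
resid_sd = np.std(ace_obs - (fit.intercept + fit.slope * index_hindcast), ddof=2)

new_index = 1.2                                               # hypothetical forecast for the coming 5 years
ace_mean = fit.intercept + fit.slope * new_index
p_above = 1 - stats.norm.cdf(ace_obs.mean(), loc=ace_mean, scale=resid_sd)
print(f"P(ACE above the historical average) = {p_above:.0%}")
```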

Significance Statement

The purpose of this article is to explain the science and methods behind a recently developed prototype climate service that uses initialized climate models to give probabilistic forecasts of 5-yr-mean North Atlantic Ocean hurricane activity, as well as 5-yr-total associated U.S. hurricane damage. Although skill in predicting North Atlantic hurricane activity on this time scale has been known for some time, a key result in this article is showing that this also leads to predictability in real-world damage. These forecasts could be of benefit to the insurance industry and to society in general.

Open access
Genki Katata, Ronan Connolly, and Peter O’Neill

Abstract

Global land surface air temperature datasets are routinely adjusted using statistical homogenization, which compares each weather station’s record with those of neighboring stations to reduce nonclimatic biases. However, homogenization can unintentionally introduce new nonclimatic biases because of an often-overlooked statistical problem known as “urban blending” or “aliasing of trend biases.” This issue arises when the homogenization process inadvertently mixes the urbanization biases of neighboring stations into the adjustments applied to each station record. As a result, urbanization biases in the original unhomogenized temperature records are spread throughout the homogenized data. To evaluate the extent of this phenomenon, the homogenized temperature data for two countries (Japan and the United States) are analyzed. Using the Japanese stations in the widely used Global Historical Climatology Network (GHCN) dataset, it is first confirmed that the unhomogenized Japanese temperature data are strongly affected by urbanization bias (possibly ∼60% of the long-term warming). The U.S. Historical Climatology Network (USHCN) dataset contains a relatively large number of long, rural station records and therefore is less affected by urbanization bias. Nonetheless, even for this relatively rural dataset, urbanization bias could account for ∼20% of the long-term warming. It is then shown that urban blending is a major problem in the homogenized data for both countries. The IPCC’s estimate of urbanization bias in the global temperature data based on homogenized temperature records may therefore have been too low as a result of urban blending. Recommendations on how future homogenization efforts could be modified to reduce urban blending are discussed.
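
The blending mechanism can be illustrated with a toy calculation (this is not any dataset's actual homogenization code): a rural record nudged toward the mean of urbanized neighbors inherits part of their spurious warming trend. The trends, weights, and station counts below are invented.

```python
# Toy illustration of "urban blending" during neighbor-based adjustment.
import numpy as np

years = np.arange(1950, 2021)
true_trend = 0.01 * (years - years[0])                       # real climate warming [deg C]
rural = true_trend.copy()                                    # unbiased rural record
urban = [true_trend + b * (years - years[0]) for b in (0.01, 0.015, 0.02)]  # neighbors with urban bias

# Crude "homogenization": nudge the rural record toward the neighborhood mean.
neighborhood_mean = np.mean(urban + [rural], axis=0)
rural_blended = 0.5 * rural + 0.5 * neighborhood_mean

def trend_per_decade(series):
    return 10 * np.polyfit(years, series, 1)[0]              # linear trend [deg C per decade]

print(f"true trend:           {trend_per_decade(true_trend):.3f} C/decade")
print(f"rural after blending: {trend_per_decade(rural_blended):.3f} C/decade")
```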

Significance Statement

Most weather station–based global land temperature datasets currently use a process called “statistical homogenization” to reduce nonclimatic biases. However, using temperature data from two countries (Japan and the United States), we show that the homogenization process unintentionally introduces new nonclimatic biases into the data as a result of an “urban blending” problem. Urban blending arises when the homogenization process inadvertently mixes the urbanization (warming) bias of neighboring stations into the adjustments applied to each station record. As a result, the urbanization biases of the unhomogenized temperature records are spread throughout the homogenized data. The net effect tends to artificially add warming to rural stations and subtract warming from urban stations until all stations have about the same amount of urbanization bias.

Open access
Doyi Kim, Hee-Jae Kim, and Yong-Sang Choi

Abstract

Understanding the growth of tropical convective clouds (TCCs) is of vital importance for the early detection of heavy rainfall. This study explores the properties of TCCs that can cause them to develop into clouds with a high probability of precipitation. Remotely sensed cloud properties, such as cloud-top temperature (CTT), cloud optical thickness (COT), and cloud effective radius (CER), as measured by a geostationary satellite, are used to train a neural network. First, an image segmentation algorithm identifies TCC objects with different cloud properties. Second, a self-organizing map (SOM) algorithm clusters TCC objects with similar cloud microphysical properties. Third, the precipitation probability (PP) for each cluster of TCCs is calculated as the proportion of precipitating TCCs among the total number of TCCs. Precipitating TCCs can be distinguished from nonprecipitating TCCs using Integrated Multi-Satellite Retrievals for Global Precipitation Measurement precipitation data. Results show that SOM clusters with a high PP (>70%) satisfy a certain range of cloud properties: CER ≥ 20 μm and CTT < 230 K. PP generally increases with increasing COT, but COT alone is not a reliable indicator of a high PP. For relatively thin clouds (COT < 30), however, CER should be much larger than 20 μm for a high PP. More importantly, these TCC conditions associated with a PP ≥ 70% are consistent across regions and periods. We expect our results will be useful for satellite nowcasting of tropical precipitation using geostationary satellite cloud properties.
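
The third step above reduces to a per-cluster frequency. The sketch below computes it from hypothetical cluster labels and precipitation flags; the arrays, thresholds, and the 3×3 map size are assumptions, not the study's data or pipeline.

```python
# Hypothetical sketch: precipitation probability (PP) per cloud cluster.
import numpy as np

rng = np.random.default_rng(4)
n = 1000
ctt = rng.uniform(200, 280, n)        # cloud-top temperature [K]
cer = rng.uniform(5, 40, n)           # cloud effective radius [micrometers]
cluster = rng.integers(0, 9, n)       # stand-in for labels from a 3x3 self-organizing map
precip = (ctt < 230) & (cer >= 20) & (rng.random(n) < 0.9)   # synthetic "precipitating" flag

for c in range(9):
    members = cluster == c
    if not members.any():
        continue
    pp = precip[members].mean()       # fraction of cluster members that precipitate
    print(f"cluster {c}: PP = {pp:.0%}, mean CTT = {ctt[members].mean():.0f} K")
```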

Significance Statement

We aim to identify the properties of tropical convective clouds (TCCs) that have a high precipitation probability. We designed a two-step framework to identify TCC objects and the ranges of cloud properties for which TCCs have a high precipitation probability. TCCs with a precipitation probability > 70% tend to have a low cloud-top temperature and a cloud particle effective radius ≥ 20 μm. Cloud optical thicknesses are distributed over a wide range, but relatively thin clouds require a particle effective radius well above 20 μm to reach a high precipitation probability. These conditions appear to be unchanged under various spatial–temporal conditions over the tropics. This important observational finding advances our understanding of the cloud–precipitation relationship in TCCs and can be applied to satellite nowcasting of precipitation in the tropics, where numerical weather forecasts are limited.

Open access
Frédéric Fabry, Joseph Samuel, and Véronique Meunier

Abstract

In a future world where most energy must come from intermittent renewable sources such as wind or solar, it would be more efficient if, for each demand area, we could determine the locations at which the output of an energy source naturally matches that area’s demand fluctuations. In parallel, weather systems such as midlatitude cyclones are often organized in a way that naturally shapes where areas of greater energy need (e.g., regions with colder air) lie with respect to windier or sunnier areas, and these are generally not collocated. As a result, the best places to generate renewable energy may not be near consumption sites; they may, however, be determined by common meteorological patterns. Using data from a reanalysis of six decades of past weather, we determined the complementarity between different sources of energy as well as the relationships between renewable supply and demand at daily averaged time scales for several North American cities. In general, demand and solar power tend to be slightly positively correlated at nearby locations away from the Rocky Mountains; however, wind power often must be obtained from greater distances and at altitude for energy production to be better timed with consumption.
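
The complementarity analysis amounts to correlating daily demand and supply series. The sketch below does this with synthetic series and invented temperature sensitivities, not the reanalysis data behind the study.

```python
# Hypothetical sketch: daily correlation between a demand proxy and candidate supply series.
import numpy as np

rng = np.random.default_rng(5)
days = 365 * 10
temp = rng.normal(5, 8, days)                                        # daily mean temperature [deg C]
demand = np.maximum(18 - temp, 0)                                    # heating-degree-day style demand proxy
solar_nearby = np.maximum(rng.normal(4, 2, days) - 0.05 * temp, 0)   # invented local solar output
wind_remote = np.maximum(rng.normal(6, 3, days) + 0.02 * temp, 0)    # invented distant wind output

for name, supply in {"solar (nearby)": solar_nearby, "wind (remote)": wind_remote}.items():
    print(f"{name}: daily correlation with demand = {np.corrcoef(demand, supply)[0, 1]:+.2f}")
```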

Significance Statement

Weather patterns such as high and low pressure systems shape where and when energy is needed for heating or cooling; they also shape how much renewable energy can be produced from wind and sunshine. Hence, they determine, for each demand location, the regions where more energy is likely to be available during periods of unusually high need. Finding those regions may lead to more timely renewable energy production in the future, helping to reduce fossil fuel use.

Open access
Yuekui Yang, Daniel Kiv, Surendra Bhatta, Manisha Ganeshan, Xiaomei Lu, and Stephen Palm

Abstract

This paper presents work using a machine learning model to diagnose Antarctic blowing snow (BLSN) properties from the Modern-Era Retrospective Analysis for Research and Applications, version 2 (MERRA-2), data. We adopt a random forest classifier for BLSN identification and a random forest regressor for diagnosing BLSN optical depth and height. BLSN properties observed by the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) are used as the truth for training the model. Using MERRA-2 fields as input, including snow age, surface elevation and pressure, temperature, specific humidity, and temperature gradient at the 2-m level, and wind speed at the 10-m level, reasonable results are achieved. Hourly blowing snow property diagnostics are generated with the trained model. Using 2010 as an example, it is shown that Antarctic BLSN frequency is much higher over East Antarctica than over West Antarctica. The months of highest frequency are April through September, during which BLSN frequency exceeds 20% over East Antarctica; for May 2010, the BLSN frequency in the region is as high as 37%. Because strong surface-based inversions suppress BLSN development, larger values of BLSN height and optical depth are usually limited to the coastal regions, where surface-based inversions are weaker.
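
A hedged sketch of the classification step using scikit-learn's RandomForestClassifier; the predictors, labels, and sample size are synthetic stand-ins for the MERRA-2 fields and CALIPSO truth described above.

```python
# Hypothetical sketch: random forest classification of blowing-snow occurrence.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n = 5000
X = np.column_stack([
    rng.uniform(0, 30, n),      # snow age [days]
    rng.uniform(0, 4000, n),    # surface elevation [m]
    rng.normal(-30, 15, n),     # 2-m temperature [deg C]
    rng.gamma(2.0, 4.0, n),     # 10-m wind speed [m/s]
])
y = ((X[:, 3] > 8) & (X[:, 2] < -10) & (rng.random(n) < 0.9)).astype(int)  # synthetic BLSN flag

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", round(clf.score(X_te, y_te), 3))
```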

Open access
Dazhi Xi, Ning Lin, Norberto C. Nadal-Caraballo, and Madison C. Yawn

Abstract

In this study, we design a statistical method to couple observations with a physics-based tropical cyclone (TC) rainfall model (TCR) and engineered-synthetic storms for assessing TC rainfall hazard. We first propose a bias-correction method to minimize the errors induced by TCR via matching the probability distribution of TCR-simulated historical TC rainfall with gauge observations. Then we assign occurrence probabilities to engineered-synthetic storms to reflect local climatology, through a resampling method that matches the probability distribution of a newly proposed storm parameter named rainfall potential (POT) in the synthetic dataset with that in the observation. POT is constructed to include several important storm parameters for TC rainfall such as TC intensity, duration, and distance and environmental humidity near landfall, and it is shown to be correlated with TCR-simulated rainfall. The proposed method has a satisfactory performance in reproducing the rainfall hazard curve in various locations in the continental United States; it is an improvement over the traditional joint probability method (JPM) for TC rainfall hazard assessment.
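
The distribution-matching step can be illustrated with a simple empirical quantile mapping; the gamma-distributed samples and the helper quantile_map below are assumptions for illustration, and the paper's bias-correction method may differ in detail.

```python
# Hypothetical sketch: quantile mapping of simulated rainfall onto an observed distribution.
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=60.0, size=300)      # gauge-observed event rainfall totals [mm]
model = rng.gamma(shape=2.0, scale=40.0, size=300)    # simulated totals with a low bias

def quantile_map(value, model_sample, obs_sample):
    """Map a simulated value onto the observed distribution at the same empirical quantile."""
    q = np.searchsorted(np.sort(model_sample), value) / len(model_sample)
    return np.quantile(obs_sample, np.clip(q, 0.0, 1.0))

print("80 mm simulated ->", round(float(quantile_map(80.0, model, obs)), 1), "mm after correction")
```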

Open access