
Georgios A. Efstathiou

Abstract

A scale-dependent dynamic Smagorinsky model is implemented in the Met Office/NERC Cloud (MONC) model using two averaging approaches: averaging along Lagrangian pathlines and local moving averages. The dynamic approaches were compared against the conventional Smagorinsky–Lilly scheme in simulating the diurnal cycle of shallow cumulus convection. The simulations spanned LES, near-gray-zone, and gray-zone resolutions and revealed the adaptability of the dynamic model across scales and stability regimes. The dynamic model can produce a scale- and stability-dependent profile of the subfilter turbulence length scale across the chosen resolution range. At gray-zone resolutions the adaptive length scales better represent the early precloud boundary layer, leading to temperature and moisture profiles closer to the LES than those of the standard Smagorinsky scheme. As a result, the initialization and general representation of the cloud field in the dynamic model are in good agreement with the LES. In contrast, the standard Smagorinsky scheme produces a less well-mixed boundary layer, which fails to ventilate moisture from the boundary layer, resulting in the delayed spinup of the cloud layer. Moreover, strong downgradient diffusion controls the turbulent transport of scalars in the cloud layer. However, the dynamic approaches rely on the resolved field to account for nonlocal transports, leading to overenergetic structures when the boundary layer is fully developed and the Lagrangian model is used. Introducing the local-averaging version of the model or adopting a new Lagrangian time scale provides stronger dissipation without significantly affecting model behavior.
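The dynamic-coefficient idea can be sketched numerically: a least-squares (Lilly-style) estimate of the Smagorinsky coefficient from Germano-identity quantities, followed by exponential relaxation in time that mimics averaging along a Lagrangian pathline. This is a minimal illustration only, not the MONC implementation; the synthetic arrays `L` and `M`, the relaxation time scale `tau`, and all numerical values are hypothetical.

```python
import numpy as np

def dynamic_coefficient(L, M, eps=1e-12):
    """Least-squares (Lilly) estimate of the Smagorinsky coefficient
    from Germano-identity quantities: C = <L*M> / <M*M>."""
    num = np.mean(L * M)
    den = np.mean(M * M) + eps
    return max(num / den, 0.0)  # clip negative values for numerical stability

def lagrangian_relax(C_prev, C_inst, dt, tau):
    """One step of exponential relaxation toward the instantaneous
    estimate, mimicking averaging along a Lagrangian pathline with
    relaxation time scale tau."""
    w = dt / (tau + dt)
    return w * C_inst + (1.0 - w) * C_prev

# Synthetic 1D stand-ins for the tensors, with least-squares slope near 0.04
rng = np.random.default_rng(0)
M = rng.normal(size=1000)
L = 0.04 * M + 0.005 * rng.normal(size=1000)
C_inst = dynamic_coefficient(L, M)
C = lagrangian_relax(C_prev=0.02, C_inst=C_inst, dt=1.0, tau=10.0)
```

The relaxation weight `dt / (tau + dt)` makes the averaged coefficient track the instantaneous estimate slowly, which is why the choice of Lagrangian time scale controls how dissipative the model is.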

Open access
James N. Moum, Daniel L. Rudnick, Emily L. Shroyer, Kenneth G. Hughes, Benjamin D. Reineman, Kyle Grindley, Jeffrey T. Sherman, Pavan Vutukur, Craig Van Appledorn, Kerry Latham, Aurélie J. Moulin, and T. M. Shaun Johnston

Abstract

A new autonomous turbulence profiling float has been designed, built, and tested in field trials off Oregon. Flippin’ χSOLO (FχS) employs a SOLO-II buoyancy engine that not only changes buoyancy but also shifts ballast to move the center of mass to positions on either side of the center of buoyancy, thus causing FχS to flip. FχS is outfitted with a full suite of turbulence sensors (two shear probes, two fast thermistors, and a pitot tube) as well as a pressure sensor and three-axis linear accelerometers. FχS descends and ascends with turbulence sensors leading, thereby permitting measurement through the sea surface. The turbulence sensors are housed antipodal to the communication antennas so as to eliminate flow disturbance; by flipping at the sea surface, the antennas are exposed for communications. The mission of FχS is to provide intensive profiling measurements of the upper ocean from 240 m depth through the sea surface, particularly during periods of extreme surface forcing. While surfaced, the accelerometers provide estimates of wave height spectra and significant wave height. From 3.5-day field trials, we here evaluate (i) turbulence statistics from two FχS units and our established shipboard profiler, Chameleon, and (ii) FχS-based wave statistics by comparison to a nearby NOAA wave buoy.
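Significant wave height from float accelerometers rests on the standard spectral relation Hs = 4·sqrt(m0), where m0 is the zeroth moment of the surface-displacement spectrum and a vertical-acceleration spectrum converts to displacement by division by (2πf)⁴. The sketch below is a generic illustration, not FχS processing: the band limits and the synthetic narrow-band swell are assumptions chosen only to make the check self-contained.

```python
import numpy as np

def significant_wave_height(f, S_acc, f_min=0.05, f_max=0.5):
    """Hs = 4*sqrt(m0) from a vertical-acceleration spectrum S_acc(f).
    Band-limiting avoids the low-frequency noise blow-up of the
    1/(2*pi*f)**4 conversion to displacement."""
    band = (f >= f_min) & (f <= f_max)
    f_b, Sa = f[band], S_acc[band]
    S_disp = Sa / (2 * np.pi * f_b) ** 4  # displacement spectrum
    m0 = np.trapz(S_disp, f_b)            # zeroth spectral moment
    return 4.0 * np.sqrt(m0)

# Consistency check: a narrow-band swell of amplitude a has m0 = a**2 / 2
f = np.linspace(0.01, 1.0, 1000)
a, f0, width = 1.0, 0.1, 0.01
S_disp_true = (a**2 / 2) * np.exp(-0.5 * ((f - f0) / width) ** 2) \
    / (width * np.sqrt(2 * np.pi))
S_acc = S_disp_true * (2 * np.pi * f) ** 4
Hs = significant_wave_height(f, S_acc)  # expect ~ 4*sqrt(0.5) = 2.83
```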

Significance Statement

The oceanographic fleet of Argo autonomous profilers yields important data that define the state of the ocean’s interior, and continued deployments over time define its evolution. A significant next step will be to include turbulence measurements on these profilers, leading to estimates of the thermodynamic mixing rates needed to predict future states of the ocean’s interior. An autonomous turbulence profiler that employs the buoyancy engine, mission logic, and remote communications of one particular Argo float is described herein. The Flippin’ χSOLO is an upper-ocean profiler tasked with rapid, continuous profiling of the upper ocean during weather conditions that preclude shipboard profiling, including the upper 10 m that is missed by shipboard turbulence profilers.

Restricted access
David W. Pierce, Daniel R. Cayan, Daniel R. Feldman, and Mark D. Risser

Abstract

A new set of CMIP6 data downscaled using the localized constructed analogs (LOCA) statistical method has been produced, covering central Mexico through southern Canada at 6-km resolution. Output from 27 CMIP6 Earth system models is included, with up to 10 ensemble members per model and three SSPs (SSP245, SSP370, and SSP585). Improvements over the previous CMIP5 downscaled data yield higher daily precipitation extremes, which have significant societal and economic implications. The improvements are accomplished by using a precipitation training dataset that better represents daily extremes and by implementing an ensemble bias correction that allows a more realistic representation of extreme high daily precipitation values in models with numerous ensemble members. Over southern Canada and the CONUS exclusive of Arizona (AZ) and New Mexico (NM), seasonal increases in daily precipitation extremes are largest in winter (∼25% in SSP370). Over Mexico, AZ, and NM, seasonal increases are largest in autumn (∼15%). Summer is the outlier season, with low model agreement except in New England and little change in 5-yr return values, but substantial increases in the 500-yr return value over the CONUS and Canada. One-in-100-yr historical daily precipitation events become substantially more frequent in the future, occurring as often as once in 30–40 years in the southeastern United States and Pacific Northwest by the end of the century under SSP370. End-users may need to consider the impacts of the higher precipitation extremes in the LOCA version 2 downscaled CMIP6 product relative to the LOCA downscaled CMIP5 product, even for similar anthropogenic emissions.
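How a historical 1-in-100-yr event becomes a more frequent future event can be illustrated with a simple extreme-value calculation. The sketch below uses a Gumbel distribution with a shifted location parameter; the location, scale, and shift values are hypothetical numbers for illustration only, not fits from the LOCA analysis, chosen so that the 100-yr level lands near a 30-yr future return period.

```python
import math

def gumbel_return_level(mu, beta, T):
    """Return level for return period T (years) from a Gumbel fit."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

def return_period_of(level, mu, beta):
    """Return period of a fixed level under a (possibly shifted) Gumbel."""
    F = math.exp(-math.exp(-(level - mu) / beta))  # non-exceedance probability
    return 1.0 / (1.0 - F)

# Hypothetical historical fit (mm/day) and a warmer-climate location shift
mu_hist, beta = 50.0, 10.0
z100 = gumbel_return_level(mu_hist, beta, 100.0)          # historical 100-yr level
T_future = return_period_of(z100, mu_hist + 12.0, beta)   # new return period, ~30 yr
```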

Restricted access
Reuben Demirdjian, James D. Doyle, Peter M. Finocchio, and Carolyn A. Reynolds

Abstract

The influence of surface latent and sensible heat fluxes on the development and interaction of an idealized extratropical cyclone (termed “primary”) with an upstream cyclone (termed “upstream”) is analyzed using the Navy’s Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS). The primary cyclone develops from an initial perturbation to a baroclinically unstable jet stream, while the upstream cyclone results from Rossby wave dispersion at the surface, where a bottom-up style of development occurs. The intensity of the upstream cyclone is strongly enhanced by surface latent heat fluxes and, to a lesser degree, by surface sensible heat fluxes. Forward trajectories initiated from the postfrontal sector of the primary cyclone travel south of the upstream anticyclone and feed into the atmospheric river and warm conveyor belt region of the upstream cyclone. Substantial moistening of this airstream results from upward surface latent heat flux present both in the primary cyclone’s postfrontal sector and along the southern flank of the anticyclone. Backward trajectories initiated from the same region show that these air parcels originate from a broad area of the lower troposphere north of both the anticyclone and the primary cyclone. The airstream identified represents a new pathway through which dry, descending air, preconditioned through surface moistening, enhances the development of an upstream cyclone via diabatically generated potential vorticity.

Restricted access
Jonathan L. Case, Patrick N. Gatlin, Jayanthi Srikishen, Bhupesh Adhikary, Md. Abdul Mannan, and Jordan R. Bell

Abstract

Some of the most intense thunderstorms on Earth occur in the Hindu Kush Himalaya (HKH) region of southern Asia, where many organizations lack the capacity needed to predict, observe, and/or effectively respond to the threats associated with high-impact convective weather. As a result, a disproportionately large number of casualties and much damage often accompany premonsoon severe thunderstorms in this region. To address this problem, we combined ensemble numerical weather prediction (NWP), satellite-based precipitation products, and land-imagery techniques into a High-Impact Weather Assessment Toolkit (HIWAT) customized for the HKH. In 2018 and 2019 demonstrations, a regional convection-allowing ensemble NWP system was configured to provide real-time probabilistic guidance on thunderstorm hazards over the HKH, applying ensemble techniques developed for U.S.-focused experiments. Case studies of damaging wind, large hail, lightning, a rare Nepalese tornado, and landfalling tropical cyclone events show how HIWAT efficiently packages ensemble output into products that are readily interpreted by forecasters in the HKH. Precipitation and total lightning flash verification reveals that skill was highest where deep convection was most frequently observed, in Bangladesh and northeastern India, and that verification scores exceeded global ensemble scores for heavy precipitation rates. These results demonstrate that plausible forecasts of thunderstorm hazards can be attained with relatively low computational resources, thereby facilitating advances in extreme weather forecasting services in historically underserved regions such as the HKH. In early 2022, a custom version of HIWAT was installed at the Bangladesh Meteorological Department using in-house computational resources, providing regional ensemble forecast guidance in real time.

Free access
L. Schneider, O. Konter, J. Esper, and K. J. Anchukaitis

Abstract

Since the Paris Agreement, climate policy has focused on maximum global warming targets of 1.5° and 2°C. However, the agreement lacks a formal definition of the 19th century “pre-industrial” temperature baseline for these targets. If global warming is estimated with respect to the 1850–1900 mean, as in the latest IPCC reports, uncertainty in early instrumental temperatures affects the quantification of total warming. Here, we analyze gridded datasets of instrumental observations together with large-scale climate reconstructions from tree rings to evaluate 19th century baseline temperatures. From 1851 to 1900, warm-season temperatures of the Northern Hemisphere extratropical landmasses were 0.20°C cooler than the 20th century mean, with a range of 0.14–0.26°C among three instrumental datasets. Over the same interval, proxy-based temperature reconstructions show on average 0.39°C colder conditions, with a range of 0.19–0.55°C among six records. We show that anomalously low reconstructed temperatures at high latitudes are under-represented in the instrumental fields, likely because of the lack of station records in these remote regions. The 19th century offset between warmer instrumental and colder reconstructed temperatures shrinks by one-third if spatial coverage is restricted to the grid cells shared by the different temperature fields. The instrumental dataset from Berkeley Earth shows the smallest offset from the reconstructions, indicating that the additional stations included in this product through more liberal data selection lead to cooler baseline temperatures. The limited early instrumental records, together with the comparison to reconstructions, suggest an overestimation of 19th century temperatures, which in turn further reduces the probability of achieving the Paris targets.

Restricted access
Anamika Shreevastava, Colin Raymond, and Glynn C. Hulley

Abstract

Heatwaves in California manifest as both dry and humid events. While both forms have become more prevalent, recent studies have identified a shift toward more humid events. Understanding the complex interaction of each heatwave type with the urban heat island is crucial for assessing impacts but remains understudied. Here, we address this gap by contrasting how dry versus humid heatwaves shape the intra-urban heat of the greater Los Angeles (LA) area. We use a contrasting pair of consecutive heatwaves from 2020 as a case study: a prolonged humid heatwave in August and an extremely dry heatwave in September. We use MERRA-2 reanalysis data to compare the mesoscale dynamics, followed by high-resolution Weather Research and Forecasting (WRF) modeling over urbanized Southern California. We employ moist thermodynamic variables to quantify heat stress and perform spatial clustering analysis to characterize the spatiotemporal intra-urban variability. We find that despite temperatures being 10±3°C hotter in the September heatwave, the wet-bulb temperature, closely related to the risk of human heat stroke, was higher in August. While dry and humid heat display different spatial patterns, three distinct spatial clusters emerge based on non-heatwave local climates, and both types of heatwaves diminish the intra-urban variability of heat stress. Valley areas such as San Bernardino and Riverside experience the worst impacts, with up to 6±0.5°C of additional heat stress during heatwave nights. Our results highlight the need to account for the disparity in small-scale heatwave patterns across urban neighborhoods when designing policies for equitable climate action.
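The central comparison, a dry heatwave that is hotter yet carries a lower wet-bulb temperature than a humid one, can be reproduced with Stull's (2011) empirical wet-bulb fit from temperature and relative humidity. The fit itself is standard; the temperature and humidity pairs below are illustrative values, not the study's observed data.

```python
import math

def wet_bulb_stull(T, RH):
    """Wet-bulb temperature (deg C) from air temperature T (deg C) and
    relative humidity RH (%), via Stull's (2011) empirical fit.
    Valid roughly for RH 5%-99% and T from -20 to 50 deg C at sea level."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH)
            - math.atan(RH - 1.676331)
            + 0.00391838 * RH ** 1.5 * math.atan(0.023101 * RH)
            - 4.686035)

# Illustrative hot-dry vs warm-humid comparison:
# the cooler but humid case yields the higher wet-bulb temperature.
tw_dry = wet_bulb_stull(42.0, 15.0)    # extreme dry heatwave
tw_humid = wet_bulb_stull(33.0, 65.0)  # prolonged humid heatwave
```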

Restricted access
Julien Brajard, François Counillon, Yiguo Wang, and Madlen Kimmritz

Abstract

Dynamical climate predictions are produced by assimilating observations and running ensemble simulations of Earth system models. This process is time-consuming, and by the time a forecast is delivered, new observations are already available, making the forecast partly outdated at its release date. Moreover, producing such predictions is computationally demanding, which restricts their production frequency. We tested the potential of a computationally cheap weighted-averaging technique that can continuously adjust such probabilistic forecasts between production cycles using newly available data. The method estimates local positive weights within a Bayesian framework, favoring members closer to the observations. We tested the approach with the Norwegian Climate Prediction Model (NorCPM), which assimilates monthly sea surface temperature (SST) and hydrographic profiles with the ensemble Kalman filter (EnKF). By the time the NorCPM forecast is delivered operationally, a week of unused SST data is available. We demonstrate the benefit of our weighting method on retrospective hindcasts. The weighting method greatly enhanced the NorCPM hindcast skill compared to the standard equal-weight approach up to a 2-month lead time (global correlation of 0.71 versus 0.55 at a 1-month lead time and 0.51 versus 0.45 at a 2-month lead time). The skill at a 1-month lead time is comparable to the accuracy of the EnKF analysis. We also show that weights determined using SST data can improve the skill for other quantities, such as sea ice extent. Our approach can provide a continuous forecast between intermittent forecast production cycles and can be extended to other independent datasets.
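A member-weighting step of the general kind described above can be sketched in a few lines: weights proportional to a Gaussian likelihood of each member's misfit to the new observations, then a weighted mean replacing the equal-weight ensemble mean. The Gaussian error model, the `obs_err` value, and the toy ensemble are assumptions for illustration, not NorCPM's exact formulation.

```python
import numpy as np

def member_weights(ens, obs, obs_err):
    """Posterior-style weights for ensemble members given newly available
    observations: w_i proportional to exp(-0.5 * ||y - x_i||^2 / sigma^2)."""
    misfit = np.sum((ens - obs) ** 2, axis=1) / obs_err ** 2
    logw = -0.5 * misfit
    w = np.exp(logw - logw.max())  # subtract max for numerical stability
    return w / w.sum()

# Toy 3-member ensemble of 3 gridpoint values; member 0 is closest to obs
obs = np.array([1.0, 2.0, 3.0])
ens = np.array([[1.1, 2.0, 2.9],
                [0.0, 1.0, 2.0],
                [2.0, 3.0, 4.0]])
w = member_weights(ens, obs, obs_err=0.5)
adjusted = w @ ens  # weighted forecast replaces the equal-weight mean
```

Because the weights are positive and sum to one, the adjusted forecast remains a convex combination of the original members, so no new states are invented; the ensemble is merely re-centered toward the data.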

Restricted access
Leah Johnson, Baylor Fox-Kemper, Qing Li, Hieu T. Pham, and Sutanu Sarkar

Abstract

This work evaluates the fidelity of various upper-ocean turbulence parameterizations subject to realistic monsoon forcing and presents a finite-time ensemble vector (EV) method to better assess the design and numerical principles of these parameterizations. The EV method emphasizes the dynamics of a turbulence closure multimodel ensemble and is applied to evaluate ten different ocean surface boundary layer (OSBL) parameterizations within a single-column (SC) model against two boundary layer large-eddy simulations (LES). Both LES include realistic surface forcing, but one includes wind-driven shear turbulence only, while the other includes additional Stokes forcing through the wave-averaged equations, which generates Langmuir turbulence. The finite-time EV framework focuses on the local behavior of the mixed layer dynamical system and isolates the forcing and ocean-state conditions under which the turbulence parameterizations most disagree. Identifying this disagreement provides the potential to evaluate the SC models comparatively against the LES. Observations collected during the 2018 monsoon onset in the Bay of Bengal provide a case study for evaluating the models under realistic and variable forcing conditions. The case study results highlight two regimes where the models disagree: (a) during wind-driven deepening of the mixed layer and (b) under strong diurnal forcing.

Restricted access