Search Results

Showing items 11–20 of 32 for Author or Editor: David Hall (access: all content).
Jesse Norris, Alex Hall, J. David Neelin, Chad W. Thackeray, and Di Chen

Abstract

Daily and subdaily precipitation extremes in historical simulations from phase 6 of the Coupled Model Intercomparison Project (CMIP6) are evaluated against satellite-based observational estimates. Extremes are defined as the precipitation amount exceeded every x years, with x ranging from 0.01 to 10, encompassing the rarest events that are detectable in the observational record without excessive sampling noise. The discrepancy between models and observations grows as temporal resolution increases: for daily extremes, the multimodel median underestimates the highest percentiles by about a third, and for 3-hourly extremes by about 75% in the tropics. The novelty of the current study is that, to understand the model spread, we evaluate the 3D structure of the atmosphere when extremes occur. In midlatitudes, where extremes are simulated predominantly explicitly, the intuitive relationship holds whereby higher-resolution models produce larger extremes (r = −0.49, the correlation being negative because resolution is quantified as grid spacing), via greater vertical velocity. In the tropics, the convective fraction (the fraction of precipitation generated directly by the convective scheme) is more relevant. For models below 60% convective fraction, extreme precipitation amount decreases with convective fraction (r = −0.63), but above 75% convective fraction this relationship breaks down. In the lower-convective-fraction models, there is more moisture in the lower troposphere, closer to saturation; in the higher-convective-fraction models, there is deeper convection and higher cloud tops, which appears to be more physical. Thus, the lower-convective-fraction models are mostly closer to the observations of extreme precipitation in the tropics, but likely for the wrong reasons. These intermodel differences in the environment in which extremes are simulated hold clues as to how parameterizations could be modified in general circulation models to produce more credible twenty-first-century projections.
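The return-period definition above maps onto a simple empirical quantile: for daily data, the amount exceeded once every x years is approximately the 1 − 1/(365.25·x) quantile of the daily series. A minimal sketch of this calculation (the synthetic gamma-distributed data are purely illustrative, not from the study):

```python
import numpy as np

def return_level(daily_precip, x_years):
    """Empirical precipitation amount exceeded once every x_years,
    estimated as the 1 - 1/(365.25 * x_years) quantile of daily data."""
    q = 1.0 - 1.0 / (365.25 * x_years)
    return np.quantile(daily_precip, q)

# Illustrative use on 40 years of synthetic daily precipitation (mm/day):
rng = np.random.default_rng(0)
daily = rng.gamma(shape=0.5, scale=8.0, size=365 * 40)
for x in (0.01, 0.1, 1.0, 10.0):
    print(f"{x:>5}-yr return level: {return_level(daily, x):.1f} mm/day")
```

The same quantile mapping underlies the paper's range of x, from 0.01 (events exceeded every few days) to 10 (the rarest events resolvable in a few decades of observations).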

Full access
Alex Hall, Amy Clement, David W. J. Thompson, Anthony Broccoli, and Charles Jackson

Abstract

Milankovitch proposed that variations in the earth’s orbit cause climate variability through a local thermodynamic response to changes in insolation. This hypothesis is tested by examining variability in an atmospheric general circulation model coupled to an ocean mixed layer model subjected to the orbital forcing of the past 165 000 yr. During Northern Hemisphere summer, the model’s response conforms to Milankovitch’s hypothesis, with high (low) insolation generating warm (cold) temperatures throughout the hemisphere. However, during Northern Hemisphere winter, the climate variations stemming from orbital forcing cannot be understood solely as a local thermodynamic response to radiation anomalies. Instead, orbital forcing perturbs the atmospheric circulation in a pattern bearing a striking resemblance to the northern annular mode, the primary mode of simulated and observed unforced atmospheric variability. The hypothesized reason for this similarity is that the circulation response to orbital forcing reflects the same dynamics that generate the unforced variability. These circulation anomalies are in turn responsible for significant fluctuations in other climate variables: most of the simulated orbital signatures in wintertime surface air temperature over midlatitude continents are directly traceable not to local radiative forcing, but to orbital excitation of the northern annular mode. This has paleoclimate implications: at the point of the model integration corresponding to the last interglacial (Eemian) period, the orbital excitation of this mode generates a 1°–2°C warm surface air temperature anomaly over Europe, providing an explanation for the warm anomaly of comparable magnitude implied by the paleoclimate proxy record. The results imply that interpretations of the paleoclimate record must account for changes in surface temperature driven not only by changes in insolation, but also by perturbations in atmospheric dynamics.
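For context, the insolation forcing invoked by Milankovitch can be written in the standard textbook form for daily-mean top-of-atmosphere insolation (supplied here for reference, not quoted from the paper):

\[
\overline{Q} = \frac{S_0}{\pi}\left(\frac{\bar{d}}{d}\right)^{2}\left(h_0\,\sin\phi\,\sin\delta + \cos\phi\,\cos\delta\,\sin h_0\right),
\]

where S_0 is the solar constant, d the Earth–Sun distance, φ the latitude, δ the solar declination, and h_0 the hour angle at sunrise and sunset. Orbital variations enter through d and δ, which depend on eccentricity, obliquity, and precession; it is the hemispheric and seasonal pattern of these insolation anomalies against which the simulated circulation response is compared.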

Full access
Peter D. Dueben, Martin G. Schultz, Matthew Chantry, David John Gagne II, David Matthew Hall, and Amy McGovern

Abstract

Benchmark datasets and benchmark problems have been a key aspect of the success of modern machine learning applications in many scientific domains. Consequently, an active discussion about benchmarks for applications of machine learning has also started in the atmospheric sciences. Such benchmarks allow machine learning tools and approaches to be compared in a quantitative way and enable a separation of concerns between domain and machine learning scientists. However, a clear definition of benchmark datasets for weather and climate applications is still missing, leaving many domain scientists confused. In this paper, we equip the domain of atmospheric sciences with a recipe for building proper benchmark datasets, present a (nonexclusive) list of domain-specific challenges for machine learning, and elaborate on where and what benchmark datasets will be needed to tackle these challenges. We hope that the creation of benchmark datasets will help make the machine learning efforts in atmospheric sciences more coherent and, at the same time, direct the efforts of machine learning scientists and experts in high-performance computing toward the most imminent challenges in atmospheric sciences. We focus on benchmarks for the atmospheric sciences (weather, climate, and air-quality applications), although much of this paper will also hold for, or at least be transferable to, other areas of the Earth system sciences.
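As a concrete illustration of the separation of concerns a benchmark enables, the sketch below fixes the test split and the scoring metric inside the dataset object, so every method is evaluated on identical terms. All names here (WeatherBenchmark, the persistence baseline) are hypothetical illustrations, not an interface defined in the paper:

```python
import numpy as np

class WeatherBenchmark:
    """Hypothetical benchmark object: a frozen test set plus a fixed metric,
    so that competing methods are scored on exactly the same task."""

    def __init__(self, test_inputs, test_targets):
        self.test_inputs = test_inputs    # held-out initial states
        self.test_targets = test_targets  # verifying analyses/observations

    def score(self, predictions):
        """Root-mean-square error against the frozen verification set."""
        return float(np.sqrt(np.mean((predictions - self.test_targets) ** 2)))

    def persistence_baseline(self):
        """Trivial reference forecast: the future equals the initial state."""
        return self.score(self.test_inputs)

# Illustrative use with synthetic gridded fields (100 cases on a 32 x 64 grid):
rng = np.random.default_rng(1)
truth = rng.normal(size=(100, 32, 64))
bench = WeatherBenchmark(test_inputs=truth + rng.normal(scale=0.5, size=truth.shape),
                         test_targets=truth)
print(f"persistence RMSE: {bench.persistence_baseline():.2f}")
```

Publishing the baseline score alongside the frozen data gives any new method an unambiguous number to beat.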

Significance Statement

Machine learning is the study of computer algorithms that learn automatically from data. The atmospheric sciences have started to explore sophisticated machine learning techniques, and the community is making rapid progress on the uptake of new methods for a large number of application areas. This paper provides a clear definition of so-called benchmark datasets for weather and climate applications, which help research groups share data and machine learning solutions, reduce time spent on data processing, generate synergies between groups, and make tool development more targeted and comparable. Furthermore, a list of benchmark datasets that will be needed to tackle important challenges for the use of machine learning in atmospheric sciences is provided.

Free access
Terry L. Clark, William D. Hall, Robert M. Kerr, Don Middleton, Larry Radke, F. Martin Ralph, Paul J. Neiman, and David Levinson

Abstract

Results from numerical simulations of the Colorado Front Range downslope windstorm of 9 December 1992 are presented. Although this case was not characterized by severe surface winds, the event caused extreme clear-air turbulence (CAT) aloft, as indicated by the severe structural damage experienced by a DC-8 cargo jet at 9.7 km above mean sea level over the mountains. Detailed measurements from the National Oceanic and Atmospheric Administration/Environmental Research Laboratories/Environmental Technology Laboratory Doppler lidar and wind profilers operating on that day and from the Defense Meteorological Satellite Program satellite allow for a uniquely rich comparison between the simulations and observations.

Four levels of grid refinement were used in the model. The outer domain used National Centers for Environmental Prediction data for initial and boundary conditions. The finest grid used 200-m spacing in all three dimensions over a 48 km × 48 km section. This range of resolution and domain coverage was sufficient to resolve the abundant variety of dynamics associated with a time-evolving windstorm forced during a frontal passage, and the full range of resolution and model complexity was essential in this case. Many aspects of this windstorm are inherently three-dimensional and are not represented in idealized models using either 2D or so-called 2D–3D dynamics.
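The quoted numbers imply a finest mesh of 240 × 240 columns (48 km at 200-m spacing per horizontal dimension). A small sketch of the nesting arithmetic, assuming for illustration a uniform 3:1 refinement ratio between the four levels (the ratio is not stated here):

```python
# Nested-grid bookkeeping for the windstorm simulation.
# Only the finest-grid numbers (200 m over 48 km x 48 km) come from the text;
# the 3:1 refinement ratio between levels is an assumption for illustration.
finest_dx_m = 200.0
levels = 4
ratio = 3

for level in range(1, levels + 1):
    dx_m = finest_dx_m * ratio ** (levels - level)
    print(f"level {level}: dx = {dx_m / 1000:.1f} km")

columns = int(48_000 / finest_dx_m)
print(f"finest domain: {columns} x {columns} columns")  # 240 x 240
```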

Both the timing and location of wave breaking compared well with observations. The model also reproduced cross-stream wavelike perturbations in the jet stream that compared well with the orientation and spacing of cloud bands observed by satellite and lidar. Model results also show that the observed CAT derives from interactions between these wavelike jet stream disturbances and mountain-forced internal gravity waves. Due to the nearly east–west orientation of the jet stream, these two interacting wave modes were orthogonal to each other. Thermal gradients associated with the intense jet stream undulations generated horizontal vortex tubes (HVTs) aligned with the mean flow. These HVTs remained aloft while they propagated downstream at about the elevation of the aircraft incident, and evidence for such a vortex was seen by the lidar. The model and observations suggest that one of these intense vortices may have caused the aircraft incident.

Reports of strong surface gusts were intermittent along the Front Range during the period of this study. The model showed that interactions between the gravity waves and the flow-aligned jet stream undulations resulted in isolated occurrences of strong surface gusts, in line with observations. The simulations show that strong shears on the upper and bottom surfaces of the jet stream combine to provide an episodic “downburst of turbulence.” In the present case, the perturbations of the jet stream provide a funnel-shaped shear zone aligned with the mean flow that acts as a guide for the downward transport of turbulence resulting from breaking gravity waves. The physical picture at upper levels is similar to that of the surface gusts described by Clark and Farley as resulting from vortex tilting. The CAT feeding into this funnel came from all surfaces of the jet stream, with more than half originating from the vertically inclined shear zones on its bottom side. Visually, the downburst of turbulence resembles a rain shaft plummeting to the surface and propagating out over the plains, leaving relatively quiescent conditions behind.

Full access
Zafer Boybeyi, Nash'at N. Ahmad, David P. Bacon, Thomas J. Dunn, Mary S. Hall, Pius C. S. Lee, R. Ananthakrishna Sarma, and Tim R. Wait

Abstract

The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA) is a multiscale nonhydrostatic atmospheric simulation system based on an adaptive unstructured grid. The basic philosophy behind the OMEGA development has been the creation of an operational tool for real-time aerosol and gas hazard prediction. The model development has been guided by two basic design considerations in order to meet the operational requirements: 1) the application of an unstructured, dynamically adaptive mesh numerical technique to atmospheric simulation, and 2) the use of embedded atmospheric dispersion algorithms. An important step in proving the utility and accuracy of OMEGA is full-scale testing of the model using simulations of real-world atmospheric events, with qualitative as well as quantitative comparisons of the model results with observations. The main objective of this paper is to provide a comprehensive evaluation of OMEGA against a major dispersion experiment in operational mode. To this end, OMEGA was run to create a 72-h forecast for the first release period (23–26 October 1994) of the European Tracer Experiment (ETEX). The predicted meteorological and dispersion fields were then evaluated against both the atmospheric observations and the ETEX dispersion measurements up to 60 h after the start of the release. In general, the evaluation showed that the OMEGA dispersion results were in good agreement with the measurements in the position, shape, and extent of the tracer cloud. However, the predictions exhibited only limited spread around the measurements, with a small tendency to underestimate the concentration values.
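Two standard scores for this kind of tracer-cloud evaluation are the fractional bias (FB) and the fraction of predictions within a factor of 2 of the measurements (FAC2); the sketch below illustrates these generic metrics, not the specific protocol used in the paper:

```python
import numpy as np

def fractional_bias(pred, obs):
    """FB = (mean(obs) - mean(pred)) / (0.5 * (mean(obs) + mean(pred)));
    positive values indicate underprediction."""
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def fac2(pred, obs):
    """Fraction of predictions within a factor of 2 of the observations."""
    mask = obs > 0
    ratio = pred[mask] / obs[mask]
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Illustrative use with synthetic paired concentrations (arbitrary units):
rng = np.random.default_rng(2)
obs = rng.lognormal(mean=1.0, sigma=1.0, size=500)
pred = obs * rng.lognormal(mean=-0.1, sigma=0.5, size=500)  # slight low bias
print(f"FB = {fractional_bias(pred, obs):+.2f}, FAC2 = {fac2(pred, obs):.2f}")
```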

Full access
Roy Rasmussen, Bruce Baker, John Kochendorfer, Tilden Meyers, Scott Landolt, Alexandre P. Fischer, Jenny Black, Julie M. Thériault, Paul Kucera, David Gochis, Craig Smith, Rodica Nitu, Mark Hall, Kyoko Ikeda, and Ethan Gutmann

This paper presents recent efforts to understand the relative accuracies of different instruments and gauge–windshield configurations for measuring snowfall. Results from the National Center for Atmospheric Research (NCAR) Marshall Field Site are highlighted. This site hosts a test bed to assess various solid precipitation measurement techniques and is a joint collaboration among the National Oceanic and Atmospheric Administration (NOAA), NCAR, the National Weather Service (NWS), and the Federal Aviation Administration (FAA). The collaboration involves testing new gauges and other solid precipitation measurement techniques against World Meteorological Organization (WMO) reference snowfall measurements. This assessment is critical for ongoing studies and applications, such as climate monitoring and aircraft deicing, that rely on accurate and consistent precipitation measurements.

Full access
Naomi Goldenson, L. Ruby Leung, Linda O. Mearns, David W. Pierce, Kevin A. Reed, Isla R. Simpson, Paul Ullrich, Will Krantz, Alex Hall, Andrew Jones, and Stefan Rahimi

Abstract

Dynamical downscaling, in which coarser-resolution global models drive higher-resolution regional climate simulations, is crucial for providing regional climate information for a broad range of uses. The pool of global climate models (GCMs) providing the fields needed for dynamical downscaling has grown relative to previous generations of the Coupled Model Intercomparison Project (CMIP). However, with limited computational resources, the GCMs must still be prioritized for subsequent downscaling studies. GCM selection for dynamical downscaling should focus on evaluating the processes relevant for providing boundary conditions to the regional models and should be informed by regional uses, such as the response of extremes to changes in the boundary conditions. This leads to the need for metrics representing processes of relevance to diverse stakeholders and subregions of a domain, and for procedures that account for metric redundancy and the statistical distinguishability of GCM rankings. Further, procedures for selecting realizations from ensembles of top-performing GCM simulations can be used to span the range of climate change signals in multiple ways. As a result, distinct weightings of metrics and prioritization of particular realizations may depend on user needs. We provide high-level guidelines for such region-specific evaluations and address how CMIP7 might enable dynamical downscaling of a representative sample of high-quality models across representative shared socioeconomic pathways (SSPs).
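A minimal sketch of the kind of selection procedure outlined above: score each GCM on several regionally relevant metrics, down-weight redundant (mutually correlated) metrics, and rank by the weighted sum of normalized errors. The model names, metrics, and numbers are invented for illustration:

```python
import numpy as np

# Hypothetical error scores (lower is better) for five GCMs on three metrics,
# e.g., jet position, boundary moisture flux, extreme-precipitation frequency.
gcms = ["GCM-A", "GCM-B", "GCM-C", "GCM-D", "GCM-E"]
errors = np.array([[0.8, 1.2, 0.9],
                   [1.1, 0.7, 1.0],
                   [0.6, 0.9, 0.7],
                   [1.4, 1.5, 1.3],
                   [0.9, 1.0, 1.1]])

# Normalize each metric to zero mean and unit variance across models.
z = (errors - errors.mean(axis=0)) / errors.std(axis=0)

# Down-weight redundant metrics: a metric's weight shrinks as its mean
# absolute correlation with the other metrics grows.
corr = np.abs(np.corrcoef(z.T))
redundancy = (corr.sum(axis=1) - 1.0) / (corr.shape[0] - 1)
weights = 1.0 / (1.0 + redundancy)
weights /= weights.sum()

scores = z @ weights
for name, s in sorted(zip(gcms, scores), key=lambda item: item[1]):
    print(f"{name}: weighted score {s:+.2f} (lower is better)")
```

In practice the ranking step would be followed by a check of statistical distinguishability (e.g., bootstrapping over metrics) before any model is discarded.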

Open access
David P. Bacon, Nash’at N. Ahmad, Zafer Boybeyi, Thomas J. Dunn, Mary S. Hall, Pius C. S. Lee, R. Ananthakrishna Sarma, Mark D. Turner, Kenneth T. Waight III, Steve H. Young, and John W. Zack

Abstract

The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA) and its embedded Atmospheric Dispersion Model constitute a new atmospheric simulation system for real-time hazard prediction, conceived out of a need to advance the state of the art in numerical weather prediction and thereby improve the capability to predict the transport and diffusion of hazardous releases. OMEGA is based upon an unstructured grid that makes possible a continuously varying horizontal grid resolution ranging from 100 km down to 1 km and a vertical resolution from a few tens of meters in the boundary layer to 1 km in the free atmosphere. OMEGA is also naturally scale spanning because its unstructured grid permits the addition of grid elements at any point in space and time. In particular, unstructured grid cells in the horizontal dimension can increase local resolution to better capture topography or important physical features of the atmospheric circulation and cloud dynamics. This means that OMEGA can readily adapt its grid to stationary surface or terrain features, or to dynamic features in the evolving weather pattern. While adaptive numerical techniques have yet to be extensively applied in atmospheric models, OMEGA is the first atmospheric model to exploit the adaptive nature of an unstructured gridding technique, and hence to apply it to real-time hazard prediction. The purpose of this paper is to provide a detailed description of the OMEGA model and system, along with a comparison of OMEGA forecast results with data.
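As a cartoon of the grid-adaptation idea described above (not OMEGA's actual refinement criterion), a mesh can be refined wherever a local indicator such as the terrain slope exceeds a threshold, so that resolution follows the features that matter:

```python
from dataclasses import dataclass

@dataclass
class Cell:
    center: tuple          # (x, y) in km
    size_km: float
    terrain_slope: float   # |grad h|, dimensionless

def adapt(cells, slope_threshold=0.05, min_size_km=1.0):
    """Split every cell whose terrain slope exceeds the threshold, down to a
    minimum size; a stand-in for a real adaptive-mesh refinement criterion."""
    refined = []
    for c in cells:
        if c.terrain_slope > slope_threshold and c.size_km > min_size_km:
            half = c.size_km / 2
            x, y = c.center
            for dx in (-half / 2, half / 2):
                for dy in (-half / 2, half / 2):
                    refined.append(Cell((x + dx, y + dy), half, c.terrain_slope))
        else:
            refined.append(c)
    return refined

# Illustrative use: one flat cell stays whole, one steep cell is split in four.
cells = [Cell((0.0, 0.0), 100.0, 0.01), Cell((100.0, 0.0), 100.0, 0.20)]
print(len(adapt(cells)), "cells after one adaptation pass")  # prints 5
```

Repeating the pass until no cell exceeds the threshold yields a continuously varying resolution; per the abstract, OMEGA applies this idea on its unstructured mesh and can also trigger refinement from dynamic features in the evolving weather pattern.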

Full access
S. G. Gopalakrishnan, David P. Bacon, Nash'at N. Ahmad, Zafer Boybeyi, Thomas J. Dunn, Mary S. Hall, Yi Jin, Pius C. S. Lee, Douglas E. Mays, Rangarao V. Madala, Ananthakrishna Sarma, Mark D. Turner, and Timothy R. Wait

Abstract

The Operational Multiscale Environment Model with Grid Adaptivity (OMEGA) is an atmospheric simulation system that links the latest methods in computational fluid dynamics and high-resolution gridding technologies with numerical weather prediction. In the fall of 1999, OMEGA was used for the first time to examine the structure and evolution of a hurricane (Floyd, 1999). The first simulation of Floyd was conducted in an operational forecast mode; additional simulations exploiting both the static and the dynamic grid adaptation options in OMEGA were performed later as part of a sensitivity–capability study. While a horizontal grid resolution ranging from about 120 km down to about 40 km was employed in the operational run, resolutions down to about 15 km were used in the sensitivity study to explicitly model the structure of the inner core. All the simulations produced very similar storm tracks and reproduced the salient features of the observed storm, such as the recurvature off the Florida coast, with an average 48-h position error of 65 km. In addition, OMEGA predicted the landfall near Cape Fear, North Carolina, with an error of less than 100 km up to 96 h in advance. It was found that higher resolution in the eyewall region of the hurricane, provided by dynamic adaptation, was capable of generating better-organized cloud and flow fields and a well-defined eye with a central pressure roughly 50 mb lower than the environment. Since then, forecasts have been performed for a number of other storms, including Georges (1998) and six 2000 storms (Tropical Storms Beryl and Chris, Hurricanes Debby and Florence, Tropical Storm Helene, and Typhoon Xangsane). The OMEGA mean track errors for all of these forecasts, 101, 140, and 298 km at 24, 48, and 72 h, respectively, represent a significant improvement over the National Hurricane Center (NHC) 1998 averages of 156, 268, and 374 km. In a direct comparison with the GFDL model, OMEGA started with a considerably larger position error yet came within 5% of the GFDL 72-h track error. This paper details the simulations produced and documents the results, including a comparison of the OMEGA forecasts against satellite data, observed tracks, reported pressure lows and maximum wind speeds, and the rainfall distribution over land.
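Track error here is the great-circle distance between the forecast and best-track storm positions at matching lead times; a minimal sketch of the calculation (the verification positions below are made up for illustration, and the improvement arithmetic uses only the means quoted above):

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2.0 * radius_km * math.asin(math.sqrt(a))

# Hypothetical 48-h verification pair (not actual Floyd positions):
forecast, best_track = (33.5, -78.0), (33.9, -78.6)
print(f"48-h track error: {great_circle_km(*forecast, *best_track):.0f} km")

# Relative improvement of the quoted OMEGA means over the NHC 1998 averages:
for lead, omega, nhc in [(24, 101, 156), (48, 140, 268), (72, 298, 374)]:
    print(f"{lead} h: {100 * (nhc - omega) / nhc:.0f}% smaller mean track error")
```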

Full access
Sandrine Bony, Robert Colman, Vladimir M. Kattsov, Richard P. Allan, Christopher S. Bretherton, Jean-Louis Dufresne, Alex Hall, Stephane Hallegatte, Marika M. Holland, William Ingram, David A. Randall, Brian J. Soden, George Tselioudis, and Mark J. Webb

Abstract

Processes in the climate system that can either amplify or dampen the climate response to an external perturbation are referred to as climate feedbacks. Climate sensitivity estimates depend critically on radiative feedbacks associated with water vapor, lapse rate, clouds, snow, and sea ice, and global estimates of these feedbacks differ among general circulation models. By reviewing recent observational, numerical, and theoretical studies, this paper shows that there has been progress since the Third Assessment Report of the Intergovernmental Panel on Climate Change in (i) the understanding of the physical mechanisms involved in these feedbacks, (ii) the interpretation of intermodel differences in global estimates of these feedbacks, and (iii) the development of methodologies for evaluating these feedbacks (or some of their components) using observations. This suggests that continuing developments in climate feedback research will progressively make it possible to constrain the GCMs’ range of climate feedbacks and climate sensitivity through an ensemble of diagnostics based on physical understanding and observations.
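In one common convention, these statements take a compact form (a textbook formulation supplied for context, not an equation from the paper): the equilibrium surface temperature response to a radiative forcing ΔF is

\[
\Delta T_s = \frac{\Delta F}{\lambda_0 - \sum_i \lambda_i},
\]

where λ_0 ≈ 3.2 W m⁻² K⁻¹ is the Planck (blackbody) response and the λ_i are the feedback parameters for water vapor, lapse rate, clouds, snow, and sea ice. Positive λ_i amplify the response, and much of the intermodel spread in climate sensitivity traces to spread in the λ_i, with the cloud term generally regarded as the largest contributor.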

Full access