Search Results

You are looking at 1 - 10 of 18 items for

  • Author or Editor: John Kennedy
Gareth S. Jones and John J. Kennedy

Abstract

The impact of including comprehensive estimates of observational uncertainties on a detection and attribution analysis of twentieth-century near-surface temperature variations is investigated. The error model of HadCRUT4, a dataset of land near-surface air temperatures and sea surface temperatures, provides estimates of measurement, sampling, and bias adjustment uncertainties. These uncertainties are incorporated into an optimal detection analysis that regresses simulated large-scale temporal and spatial variations in near-surface temperatures, driven by well-mixed greenhouse gas variations and other anthropogenic and natural factors, against observed changes. The inclusion of bias adjustment uncertainties increases the variance of the regression scaling factors and the range of attributed warming from well-mixed greenhouse gases by less than 20%. Including estimates of measurement and sampling errors has a much smaller impact on the results. The range of attributable greenhouse gas warming is larger across analyses exploring dataset structural uncertainty. The impact of observational uncertainties on the detection analysis is found to be small compared to other sources of uncertainty, such as model variability and methodological choices, but it cannot be ruled out that on different spatial and temporal scales this source of uncertainty may be more important. The results support previous conclusions that there is a dominant anthropogenic greenhouse gas influence on twentieth-century near-surface temperature increases.
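
To illustrate the regression step described above: an optimal detection analysis scales model-simulated response patterns ("fingerprints") to best match observations, and the resulting scaling factors carry the detection and attribution information. The sketch below is a deliberately simplified ordinary-least-squares version with synthetic placeholder data; the published analysis uses a noise covariance estimated from control simulations and a total-least-squares variant, which are not reproduced here.

    import numpy as np

    # Illustrative ordinary-least-squares fingerprint regression:
    # y : observed space-time temperature anomalies, flattened to one vector
    # X : columns are model-simulated responses ("fingerprints") to greenhouse
    #     gases, other anthropogenic forcing, and natural forcing
    rng = np.random.default_rng(0)
    n_points = 500
    X = rng.standard_normal((n_points, 3))                    # placeholder fingerprints
    true_beta = np.array([1.0, 0.6, 0.3])
    y = X @ true_beta + 0.5 * rng.standard_normal(n_points)   # synthetic "observations"

    # Scaling factors: beta_hat = (X^T X)^{-1} X^T y
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

    # A fingerprint is "detected" if its scaling factor is inconsistent with zero,
    # and the simulated amplitude is consistent if the factor is consistent with one.
    print("estimated scaling factors:", beta_hat)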

Full access
David W. J. Thompson, John M. Wallace, Phil D. Jones, and John J. Kennedy

Abstract

Global-mean surface temperature is affected by both natural variability and anthropogenic forcing. This study is concerned with identifying and removing from global-mean temperatures the signatures of natural climate variability over the period January 1900–March 2009. A series of simple, physically based methodologies are developed and applied to isolate the climate impacts of three known sources of natural variability: the El Niño–Southern Oscillation (ENSO), variations in the advection of marine air masses over the high-latitude continents during winter, and aerosols injected into the stratosphere by explosive volcanic eruptions. After the effects of ENSO and high-latitude temperature advection are removed from the global-mean temperature record, the signatures of volcanic eruptions and changes in instrumentation become more clearly apparent. After the volcanic eruptions are subsequently filtered from the record, the residual time series reveals a nearly monotonic global warming pattern since ∼1950. The results also reveal coupling between the land and ocean areas on the interannual time scale that transcends the effects of ENSO and volcanic eruptions. Globally averaged land and ocean temperatures are most strongly correlated when ocean leads land by ∼2–3 months. These coupled fluctuations exhibit a complicated spatial signature with largest-amplitude sea surface temperature perturbations over the Atlantic Ocean.
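
As a rough illustration of the removal step described above, one simple, physically based approach is to regress the global-mean record onto a lagged index of the natural mode and subtract the fitted component. The sketch below uses synthetic placeholder series and an assumed three-month lag; it is not the paper's exact methodology.

    import numpy as np

    # Synthetic stand-ins: monthly global-mean temperature anomalies and an ENSO
    # index (e.g., a Nino-3.4 series); real data would be read from files.
    rng = np.random.default_rng(1)
    n_months = 1200
    enso = rng.standard_normal(n_months)
    temp = (0.002 * np.arange(n_months)
            + 0.1 * np.roll(enso, 3)
            + 0.05 * rng.standard_normal(n_months))

    lag = 3                                  # assumed ocean-leads-atmosphere lag (months)
    enso_lagged = np.roll(enso, lag)
    enso_lagged[:lag] = 0.0

    # Least-squares fit of temperature onto the lagged index, then remove the fit.
    slope, intercept = np.polyfit(enso_lagged, temp, 1)
    residual = temp - (slope * enso_lagged + intercept)
    print(f"ENSO regression coefficient: {slope:.3f} K per unit index")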

Full access
Luke L. B. Davis, David W. J. Thompson, John J. Kennedy, and Elizabeth C. Kent

Abstract

A new analysis of sea surface temperature (SST) observations indicates notable uncertainty in observed decadal climate variability in the second half of the twentieth century, particularly during the decades following World War II. The uncertainties are revealed by exploring SST data binned separately for the two predominant measurement types: “engine-room intake” (ERI) and “bucket” measurements. ERI measurements indicate large decreases in global-mean SSTs from 1950 to 1975, whereas bucket measurements indicate increases in SST over this period before bias adjustments are applied but decreases after they are applied. The trends in the bias adjustments applied to the bucket data are larger than the global-mean trends during the period 1950–75, and thus the global-mean trends during this period derive largely from the adjustments themselves. This is critical, since the adjustments are based on incomplete information about the underlying measurement methods and are thus subject to considerable uncertainty. The uncertainty in decadal-scale variability is particularly pronounced over the North Pacific, where the sign of low-frequency variability through the 1950s to 1970s is different for each measurement type. The uncertainty highlighted here has important—but in our view widely overlooked—implications for the interpretation of observed decadal climate variability over both the Pacific and Atlantic basins during the mid-to-late twentieth century.
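
The comparison described above can be pictured as conditional averaging: bin the SST anomalies by measurement method and compute global means and trends for each bin, with and without the bias adjustments. A hypothetical sketch follows; the column names and synthetic values are illustrative, not the actual ICOADS or HadSST fields.

    import numpy as np
    import pandas as pd

    # Hypothetical table of SST anomaly observations with measurement metadata.
    rng = np.random.default_rng(2)
    n = 10_000
    obs = pd.DataFrame({
        "year": rng.integers(1950, 1976, n),
        "method": rng.choice(["bucket", "ERI"], n),
        "sst_anom": rng.normal(0.0, 0.5, n),
        "bias_adjustment": rng.normal(0.1, 0.05, n),
    })

    # Annual global means per measurement type, before and after adjustment.
    obs["sst_adj"] = obs["sst_anom"] - obs["bias_adjustment"]
    annual = obs.groupby(["method", "year"])[["sst_anom", "sst_adj"]].mean()

    # Linear trends (K per decade) for each method and variant.
    for method, grp in annual.groupby(level="method"):
        years = grp.index.get_level_values("year").to_numpy()
        for col in ("sst_anom", "sst_adj"):
            trend = np.polyfit(years, grp[col].to_numpy(), 1)[0] * 10
            print(f"{method:6s} {col:8s} trend: {trend:+.3f} K per decade")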

Full access
Adam Vaccaro, Julien Emile-Geay, Dominique Guillot, Resherle Verna, Colin Morice, John Kennedy, and Bala Rajaratnam

Abstract

Surface temperature is a vital metric of Earth’s climate state but is incompletely observed in both space and time: over half of monthly values are missing from the widely used HadCRUT4.6 global surface temperature dataset. Here we apply the graphical expectation–maximization algorithm (GraphEM), a recently developed imputation method, to construct a spatially complete estimate of HadCRUT4.6 temperatures. GraphEM leverages Gaussian Markov random fields (also known as Gaussian graphical models) to better estimate covariance relationships within a climate field, detecting anisotropic features such as land–ocean contrasts, orography, ocean currents, and wave-propagation pathways. This detection leads to improved estimates of missing values compared to methods (such as kriging) that assume isotropic covariance relationships, as we show with real and synthetic data. This interpolated analysis of HadCRUT4.6 data is available as a 100-member ensemble, propagating information about sampling variability available from the original HadCRUT4.6 dataset. A comparison of Niño-3.4 and global mean monthly temperature series with published datasets reveals similarities and differences due in part to the spatial interpolation method. Notably, the GraphEM-completed HadCRUT4.6 global temperature displays a stronger early twenty-first-century warming trend than its uninterpolated counterpart, consistent with recent analyses using other datasets. Known events like the 1877/78 El Niño are recovered with greater fidelity than with kriging, and result in different assessments of changes in ENSO variability through time. Gaussian Markov random fields provide a more geophysically motivated way to impute missing values in climate fields, and the associated graph provides a powerful tool to analyze the structure of teleconnection patterns. We close with a discussion of wider applications of Markov random fields in climate science.
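
The idea underlying GraphEM can be sketched with a sparse Gaussian model: estimate a sparse covariance (equivalently, a sparse precision matrix defining the graph), then impute missing values from the conditional Gaussian distribution given the observed values. The toy example below uses scikit-learn's graphical lasso on synthetic data and omits the expectation-maximization iteration that GraphEM applies to records with gaps; it is not the authors' released code.

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(3)

    # Synthetic "temperature field": 500 time steps at 6 correlated grid points.
    A = rng.standard_normal((6, 6))
    cov_true = A @ A.T + 6 * np.eye(6)
    data = rng.multivariate_normal(np.zeros(6), cov_true, size=500)

    # Sparse covariance estimate from complete records (the E-M handling of
    # records with gaps is omitted here for brevity).
    model = GraphicalLasso(alpha=0.1).fit(data)
    mu, cov = model.location_, model.covariance_

    # Impute one time step where the last two grid points are missing, using the
    # conditional mean of a multivariate Gaussian.
    x = data[0].copy()
    miss, seen = np.array([4, 5]), np.array([0, 1, 2, 3])
    cond_mean = mu[miss] + cov[np.ix_(miss, seen)] @ np.linalg.solve(
        cov[np.ix_(seen, seen)], x[seen] - mu[seen])
    print("imputed values:", cond_mean, "actual values:", x[miss])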

Full access
Brian C. Zachry, John L. Schroeder, Andrew B. Kennedy, Joannes J. Westerink, Chris W. Letchford, and Mark E. Hope

Abstract

Over the past decade, numerous field campaigns and laboratory experiments have examined air–sea momentum exchange in the deep ocean. These studies have changed the understanding of drag coefficient behavior in hurricane force winds, with a general consensus that a limiting value is reached. Near the shore, wave conditions are markedly different than in deep water because of wave shoaling and breaking processes, but only very limited data exist to assess drag coefficient behavior. Yet, knowledge of the wind stress in this region is critical for storm surge forecasting, evaluating the low-level wind field across the coastal transition zone, and informing the wind load standard along the hurricane-prone coastline. During Hurricane Ike (2008), a Texas Tech University StickNet platform obtained wind measurements in marine exposure with a fetch across the Houston ship channel. These data were used to estimate drag coefficient dependence on wind speed. Wave conditions in the ship channel and surge level at the StickNet location were simulated using the Simulating Waves Nearshore Model coupled to the Advanced Circulation Model. The simulated waves were indicative of a fetch-limited condition with maximum significant wave heights reaching 1.5 m and peak periods of 4 s. A maximum surge depth of 0.6 m inundated the StickNet. Similar to deep water studies, findings indicate that the drag coefficient reaches a limiting value at wind speeds near hurricane force. However, at wind speeds below hurricane force, the drag coefficient is higher than that of deep water datasets, particularly at the slowest wind speeds.
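
For reference, the drag coefficient discussed above relates the surface wind stress to the 10-m wind speed through C_d = (u*/U10)^2, where u* is the friction velocity. The sketch below estimates u* from a neutral logarithmic wind profile; the constants and sample numbers are illustrative and are not taken from the StickNet analysis.

    import numpy as np

    KAPPA = 0.4  # von Karman constant

    def drag_coefficient(u_star, u10):
        """C_d = (u*/U10)^2, from the bulk relation tau = rho * C_d * U10**2."""
        return (u_star / u10) ** 2

    def u_star_from_profile(u_z, z, z0):
        """Friction velocity from a neutral log wind profile, u(z) = (u*/kappa) * ln(z/z0)."""
        return KAPPA * u_z / np.log(z / z0)

    # Illustrative numbers: 25 m/s wind at 10 m over a roughness length of 1 mm.
    u_star = u_star_from_profile(25.0, 10.0, 1e-3)
    print(f"u* = {u_star:.2f} m/s, C_d = {drag_coefficient(u_star, 25.0):.4f}")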

Full access

REFRACTT 2006

Real-Time Retrieval of High-Resolution, Low-Level Moisture Fields from Operational NEXRAD and Research Radars

Rita D. Roberts, Frédéric Fabry, Patrick C. Kennedy, Eric Nelson, James W. Wilson, Nancy Rehak, Jason Fritz, V. Chandrasekar, John Braun, Juanzhen Sun, Scott Ellis, Steven Reising, Timothy Crum, Larry Mooney, Robert Palmer, Tammy Weckwerth, and Sharmila Padmanabhan

The Refractivity Experiment for H2O Research and Collaborative Operational Technology Transfer (REFRACTT), conducted in northeast Colorado during the summer of 2006, provided a unique opportunity to obtain high-resolution gridded moisture fields from the operational Denver Next Generation Weather Radar (NEXRAD) and three research radars using a radar-based index of refraction (refractivity) technique. Until now, it has not been possible to observe and monitor moisture variability in the near-surface boundary layer to such high spatial (4-km horizontal gridpoint spacing) and temporal (4–10-min update rates) resolutions using operational NEXRAD and provide these moisture fields to researchers and the National Weather Service (NWS) forecasters in real time. The overarching goals of REFRACTT were to 1) access and mosaic the refractivity data from the operational NEXRAD and research radars together over a large domain for use by NWS forecasters in real time for short-term forecasting, 2) improve our understanding of near-surface water vapor variability and the role it plays in the initiation of convection and thunderstorms, and 3) improve the accuracy of quantitative precipitation forecasts (QPF) through improved observations and assimilation of low-level moisture fields. This paper presents examples of refractivity-derived moisture fields from REFRACTT in 2006 and the moisture variability observed in the near-surface boundary layer, in association with thunderstorm initiation, and with a cold frontal passage.
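
For context, the refractivity technique rests on the standard relation N = 77.6 p/T + 3.73 × 10^5 e/T^2 (p in hPa, T in K, e the water vapor pressure in hPa), so a radar-derived N can be inverted for near-surface moisture when surface pressure and temperature are known. A minimal sketch with illustrative sample values:

    def vapor_pressure_from_refractivity(n_refr, p_hpa, t_kelvin):
        """Invert N = 77.6*p/T + 3.73e5*e/T**2 for the vapor pressure e (hPa)."""
        dry_term = 77.6 * p_hpa / t_kelvin
        return (n_refr - dry_term) * t_kelvin ** 2 / 3.73e5

    # Illustrative near-surface values: N from radar, p and T from surface stations.
    e = vapor_pressure_from_refractivity(n_refr=300.0, p_hpa=850.0, t_kelvin=298.0)
    print(f"water vapor pressure: {e:.1f} hPa")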

Full access
Stefan Brönnimann, Rob Allan, Christopher Atkinson, Roberto Buizza, Olga Bulygina, Per Dahlgren, Dick Dee, Robert Dunn, Pedro Gomes, Viju O. John, Sylvie Jourdain, Leopold Haimberger, Hans Hersbach, John Kennedy, Paul Poli, Jouni Pulliainen, Nick Rayner, Roger Saunders, Jörg Schulz, Alexander Sterin, Alexander Stickler, Holly Titchner, Maria Antonia Valente, Clara Ventura, and Clive Wilkinson

Abstract

Global dynamical reanalyses of the atmosphere and ocean fundamentally rely on observations, not just for the assimilation (i.e., for the definition of the state of the Earth system components) but also in many other steps along the production chain. Observations are used to constrain the model boundary conditions, for the calibration or uncertainty determination of other observations, and for the evaluation of data products. This requires major efforts, including data rescue (for historical observations), data management (including metadatabases), compilation and quality control, and error estimation. The work on observations ideally occurs one cycle ahead of the generation cycle of reanalyses, allowing the reanalyses to make full use of it. In this paper we describe the activities within ERA-CLIM2, which range from surface, upper-air, and Southern Ocean data rescue to satellite data recalibration and from the generation of snow-cover products to the development of a global station data metadatabase. The project has not produced new data collections. Rather, the data generated has fed into global repositories and will serve future reanalysis projects. The continuation of this effort is first contingent upon the organization of data rescue and also upon a series of targeted research activities to address newly identified in situ and satellite records.

Open access
Peter W. Thorne, Kate M. Willett, Rob J. Allan, Stephan Bojinski, John R. Christy, Nigel Fox, Simon Gilbert, Ian Jolliffe, John J. Kennedy, Elizabeth Kent, Albert Klein Tank, Jay Lawrimore, David E. Parker, Nick Rayner, Adrian Simmons, Lianchun Song, Peter A. Stott, and Blair Trewin

No abstract available.

Full access
Elizabeth C. Kent, John J. Kennedy, Thomas M. Smith, Shoji Hirahara, Boyin Huang, Alexey Kaplan, David E. Parker, Christopher P. Atkinson, David I. Berry, Giulia Carella, Yoshikazu Fukuda, Masayoshi Ishii, Philip D. Jones, Finn Lindgren, Christopher J. Merchant, Simone Morak-Bozzo, Nick A. Rayner, Victor Venema, Souichiro Yasui, and Huai-Min Zhang

Abstract

Global surface temperature changes are a fundamental expression of climate change. Recent, much-debated variations in the observed rate of surface temperature change have highlighted the importance of uncertainty in adjustments applied to sea surface temperature (SST) measurements. These adjustments are applied to compensate for systematic biases and changes in observing protocol. Better quantification of the adjustments and their uncertainties would increase confidence in estimated surface temperature change and provide higher-quality gridded SST fields for use in many applications.

Bias adjustments have been based on either physical models of the observing processes or the assumption of an unchanging relationship between SST and a reference dataset, such as night marine air temperature. These approaches produce similar estimates of SST bias on the largest space and time scales, but regional differences can exceed the estimated uncertainty. We describe challenges to improving our understanding of SST biases. Overcoming these will require clarification of past observational methods, improved modeling of biases associated with each observing method, and the development of statistical bias estimates that are less sensitive to the absence of metadata regarding the observing method.

New approaches are required that embed bias models, specific to each type of observation, within a robust statistical framework. Mobile platforms and rapid changes in observation type require biases to be assessed for individual historic and present-day platforms (i.e., ships or buoys) or groups of platforms. Lack of observational metadata and high-quality observations for validation and bias model development are likely to remain major challenges.
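
One of the two adjustment approaches described in the abstract, assuming an unchanging relationship between SST and a reference such as night marine air temperature, can be illustrated crudely: estimate the bias as a low-pass-filtered difference between collocated SST and reference anomalies. The sketch below uses synthetic series and is not the HadSST or ERSST procedure.

    import numpy as np

    rng = np.random.default_rng(4)
    n_months = 600

    # Synthetic collocated anomaly series: a reference (e.g., night marine air
    # temperature) and SSTs carrying a slowly varying observational bias.
    reference = 0.3 * np.sin(np.arange(n_months) / 40.0) + 0.1 * rng.standard_normal(n_months)
    bias_true = 0.2 * (np.arange(n_months) > 300)     # step change in observing practice
    sst_raw = reference + bias_true + 0.1 * rng.standard_normal(n_months)

    # Bias estimate: running-mean SST-minus-reference difference, assuming the
    # underlying SST/reference relationship is unchanged over time.
    window = 60  # months
    kernel = np.ones(window) / window
    bias_est = np.convolve(sst_raw - reference, kernel, mode="same")

    sst_adjusted = sst_raw - bias_est
    print(f"raw minus reference, late period:      {np.mean(sst_raw[350:] - reference[350:]):+.2f} K")
    print(f"adjusted minus reference, late period: {np.mean(sst_adjusted[350:] - reference[350:]):+.2f} K")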

Open access
Adam J. Clark, Israel L. Jirak, Scott R. Dembek, Gerry J. Creager, Fanyou Kong, Kevin W. Thomas, Kent H. Knopfmeier, Burkely T. Gallo, Christopher J. Melick, Ming Xue, Keith A. Brewster, Youngsun Jung, Aaron Kennedy, Xiquan Dong, Joshua Markel, Matthew Gilmore, Glen S. Romine, Kathryn R. Fossell, Ryan A. Sobash, Jacob R. Carley, Brad S. Ferrier, Matthew Pyle, Curtis R. Alexander, Steven J. Weiss, John S. Kain, Louis J. Wicker, Gregory Thompson, Rebecca D. Adams-Selin, and David A. Imy

Abstract

One primary goal of annual Spring Forecasting Experiments (SFEs), which are co-organized by the National Oceanic and Atmospheric Administration's (NOAA) National Severe Storms Laboratory and Storm Prediction Center and conducted in NOAA's Hazardous Weather Testbed, is documenting performance characteristics of experimental, convection-allowing modeling systems (CAMs). Since 2007, the number of CAMs (including CAM ensembles) examined in the SFEs has increased dramatically, peaking at six different CAM ensembles in 2015. Meanwhile, major advances have been made in creating, importing, processing, and verifying these large and complex datasets, and in developing tools for analyzing and visualizing them. However, progress toward identifying optimal CAM ensemble configurations has been inhibited because the different CAM systems have been independently designed, making it difficult to attribute differences in performance characteristics. Thus, for the 2016 SFE, a much more coordinated effort among many collaborators was made by agreeing on a set of model specifications (e.g., model version, grid spacing, domain size, and physics) so that the simulations contributed by each collaborator could be combined to form one large, carefully designed ensemble known as the Community Leveraged Unified Ensemble (CLUE). The 2016 CLUE was composed of 65 members contributed by five research institutions and represents an unprecedented effort to enable an evidence-driven decision process to help guide NOAA's operational modeling efforts. Eight unique experiments were designed within the CLUE framework to examine issues directly relevant to the design of NOAA's future operational CAM-based ensembles. This article will highlight the CLUE design and present results from one of the experiments examining the impact of single versus multicore CAM ensemble configurations.

Full access