Search Results
You are looking at 1 - 9 of 9 items for
- Author or Editor: B. Soden
Abstract
This paper presents a quantitative methodology for evaluating air–sea fluxes related to ENSO from different atmospheric products. A statistical model of the fluxes from each atmospheric product is coupled to an ocean general circulation model (GCM). Four different products are evaluated: reanalyses from the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF), satellite-derived data from the Special Sensor Microwave/Imager (SSM/I) platform and the International Satellite Cloud Climatology Project (ISCCP), and an atmospheric GCM developed at the Geophysical Fluid Dynamics Laboratory (GFDL) as part of the Atmospheric Model Intercomparison Project (AMIP) II. For this study, comparisons between the datasets are restricted to the dominant air–sea mode.
The stability of a coupled model using only the dominant mode and the associated predictive skill of the model are strongly dependent on which atmospheric product is used. The model is unstable and oscillatory for the ECMWF product, damped and oscillatory for the NCEP and GFDL products, and unstable (nonoscillatory) for the satellite product. The ocean model is coupled with patterns of wind stress as well as heat fluxes. This distinguishes the present approach from the existing paradigm for ENSO models where surface heat fluxes are parameterized as a local damping term in the sea surface temperature (SST) equation.
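As a rough illustration of the stability classification above: the regime of a linearized coupled system can be read off the eigenvalues of its coupling matrix. A complex leading eigenvalue with positive real part gives unstable oscillatory behavior, a negative real part gives damped oscillations, and a real positive eigenvalue gives nonoscillatory growth. The sketch below assumes a two-variable recharge-oscillator-style system with illustrative coefficients; it is not the statistical model fitted to the four atmospheric products in the paper.

```python
# Minimal sketch: classify the coupled regime of a linearized system
# d/dt [T, h] = A @ [T, h], for SST anomaly T and thermocline-depth
# anomaly h.  The matrices are illustrative placeholders, NOT the
# statistical fits to the NCEP, ECMWF, satellite, or GFDL products.
import numpy as np

def classify(A):
    """Classify the coupled regime by the leading eigenvalue of A."""
    lam = np.linalg.eigvals(A)
    lead = lam[np.argmax(lam.real)]
    growth = "unstable" if lead.real > 0 else "damped"
    kind = "oscillatory" if abs(lead.imag) > 1e-12 else "nonoscillatory"
    return f"{growth}, {kind} (leading eigenvalue {lead:.2f})"

print(classify(np.array([[ 0.3, 1.0], [-1.0, -0.1]])))  # unstable, oscillatory
print(classify(np.array([[-0.3, 1.0], [-1.0, -0.2]])))  # damped, oscillatory
print(classify(np.array([[ 0.5, 0.2], [ 0.1,  0.3]])))  # unstable, nonoscillatory
```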
Abstract
We describe the historical evolution of the conceptualization, formulation, quantification, application, and utilization of “radiative forcing” (RF) of Earth’s climate. Basic theories of shortwave and longwave radiation were developed through the nineteenth and twentieth centuries and established the analytical framework for defining and quantifying the perturbations to Earth’s radiative energy balance by natural and anthropogenic influences. The insight that Earth’s climate could be radiatively forced by changes in carbon dioxide, first introduced in the nineteenth century, gained empirical support with sustained observations of the atmospheric concentrations of the gas beginning in 1957. Advances in laboratory and field measurements, theory, instrumentation, computational technology, data, and analysis of well-mixed greenhouse gases and the global climate system through the twentieth century enabled the development and formalism of RF; this allowed RF to be related to changes in global-mean surface temperature with the aid of increasingly sophisticated models. This in turn led to RF becoming firmly established as a principal concept in climate science by 1990. The linkage with surface temperature has proven to be the most important application of the RF concept, enabling a simple metric to evaluate the relative climate impacts of different agents. The late 1970s and 1980s saw accelerated developments in quantification, including the first assessment of the effect of the forcing due to the doubling of carbon dioxide on climate (the “Charney” report). The concept was subsequently extended to a wide variety of agents beyond well-mixed greenhouse gases (WMGHGs; carbon dioxide, methane, nitrous oxide, and halocarbons) to short-lived species such as ozone. The WMO and IPCC international assessments began the important sequence of periodic evaluations and quantifications of the forcings by natural (solar irradiance changes and stratospheric aerosols resulting from volcanic eruptions) and a growing set of anthropogenic agents (WMGHGs, ozone, aerosols, land surface changes, contrails). From the 1990s to the present, knowledge and scientific confidence in the radiative agents acting on the climate system have proliferated. The conceptual basis of RF has also evolved as both our understanding of the way radiative forcing drives climate change and the diversity of the forcing mechanisms have grown. This has led to the current situation where “effective radiative forcing” (ERF) is regarded as the preferred practical definition of radiative forcing in order to better capture the link between forcing and global-mean surface temperature change. The use of ERF, however, comes with its own attendant issues, including challenges in its diagnosis from climate models, its applications to small forcings, and blurring of the distinction between rapid climate adjustments (fast responses) and climate feedbacks; this will necessitate further elaboration of its utility in the future. Global climate model simulations of radiative perturbations by various agents have established how the forcings affect other climate variables besides temperature (e.g., precipitation). The forcing–response linkage as simulated by models, including the diversity in the spatial distribution of forcings by the different agents, has provided a practical demonstration of the effectiveness of agents in perturbing the radiative energy balance and causing climate changes. 
The significant advances over the past half century have established, with very high confidence, that the global-mean ERF due to human activity since preindustrial times is positive (the 2013 IPCC assessment gives a best estimate of 2.3 W m⁻², with a range from 1.1 to 3.3 W m⁻²; 90% confidence interval). Further, except in the immediate aftermath of climatically significant volcanic eruptions, the net anthropogenic forcing dominates over natural radiative forcing mechanisms. Nevertheless, the substantial remaining uncertainty in the net anthropogenic ERF leads to large uncertainties in estimates of climate sensitivity from observations and in predicting future climate impacts. The uncertainty in the ERF arises principally from the incorporation of the rapid climate adjustments in the formulation, the well-recognized difficulties in characterizing the preindustrial state of the atmosphere, and the incomplete knowledge of the interactions of aerosols with clouds. This uncertainty impairs the quantitative evaluation of climate adaptation and mitigation pathways in the future. A grand challenge in Earth system science lies in continuing to sustain the relatively simple essence of the radiative forcing concept in a form similar to that originally devised, and at the same time improving the quantification of the forcing. This, in turn, demands an accurate, yet increasingly complex and comprehensive, accounting of the relevant processes in the climate system.
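The simplest form of the forcing–temperature linkage discussed above relates equilibrium warming to forcing through a climate sensitivity parameter, ΔT ≈ λ · ERF. The sketch below applies this relation to the quoted AR5 ERF range; the value of λ is an illustrative assumption, not a result of the paper.

```python
# Sketch of the simplest forcing-response linkage: Delta_T = lambda * ERF.
# The sensitivity parameter below is an illustrative assumption (roughly
# 0.8 K per W m^-2, i.e. ~3 K per CO2 doubling), not taken from the paper.
lam = 0.8                      # K / (W m^-2), assumed
for erf in (1.1, 2.3, 3.3):    # AR5 anthropogenic ERF: low / best / high, W m^-2
    print(f"ERF = {erf:.1f} W m^-2 -> equilibrium Delta_T ~ {lam * erf:.1f} K")
```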
Abstract
The climate response to idealized changes in the atmospheric CO₂ concentration by the new GFDL climate model (CM2) is documented. This new model is very different from earlier GFDL models in its parameterizations of subgrid-scale physical processes, numerical algorithms, and resolution. The model was constructed to be useful for both seasonal-to-interannual predictions and climate change research. Unlike previous versions of the global coupled GFDL climate models, CM2 does not use flux adjustments to maintain a stable control climate. Results from two model versions, Climate Model versions 2.0 (CM2.0) and 2.1 (CM2.1), are presented.
Two atmosphere–mixed layer ocean or slab models, Slab Model versions 2.0 (SM2.0) and 2.1 (SM2.1), are constructed corresponding to CM2.0 and CM2.1. Using the SM2 models to estimate the climate sensitivity, it is found that the equilibrium globally averaged surface air temperature increases by 2.9 K (SM2.0) and 3.4 K (SM2.1) for a doubling of the atmospheric CO₂ concentration. When forced by a 1% per year CO₂ increase, the surface air temperature difference around the time of CO₂ doubling [transient climate response (TCR)] is about 1.6 K for both coupled model versions (CM2.0 and CM2.1). The simulated warming is near the median of the responses documented for the climate models used in the 2001 Intergovernmental Panel on Climate Change (IPCC) Working Group I Third Assessment Report (TAR).
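As a small worked example of the idealized scenario above: with CO₂ increasing at 1% per year, the concentration doubles after ln(2)/ln(1.01) ≈ 70 years, which is when the TCR is evaluated. The sketch below reproduces that arithmetic alongside the sensitivity figures quoted in the abstract.

```python
# Under the idealized 1% per year CO2 increase, concentrations double
# after ln(2)/ln(1.01) ~ 70 years, the point at which TCR is evaluated.
import math

years_to_double = math.log(2) / math.log(1.01)
print(f"CO2 doubles after ~{years_to_double:.0f} years")   # ~70

# Equilibrium vs transient warming for the two versions (values from the
# abstract); TCR is smaller because the ocean is still taking up heat.
for name, ecs in (("SM2.0", 2.9), ("SM2.1", 3.4)):
    print(f"{name}: equilibrium {ecs} K vs transient ~1.6 K")
```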
The thermohaline circulation (THC) weakened in response to increasing atmospheric CO₂. By the time of CO₂ doubling, the weakening in CM2.1 is larger than that found in CM2.0: 7 and 4 Sv (1 Sv ≡ 10⁶ m³ s⁻¹), respectively. However, the THC in the control integration of CM2.1 is stronger than in CM2.0, so that the percentage change in the THC between the two versions is more similar. The average THC change for the models presented in the TAR is about 3 or 4 Sv; however, the range across the model results is very large, varying from a slight increase (+2 Sv) to a large decrease (−10 Sv).
An intercomparison of radiation codes used in retrieving upper-tropospheric humidity (UTH) from observations in the ν₂ (6.3 μm) water vapor absorption band was performed. This intercomparison is one part of a coordinated effort within the Global Energy and Water Cycle Experiment Water Vapor Project to assess our ability to monitor the distribution and variations of upper-tropospheric moisture from spaceborne sensors. A total of 23 different codes participated in the study, ranging from detailed line-by-line (LBL) models to coarser-resolution narrowband (NB) models to highly parameterized single-band (SB) models. Forward calculations were performed using a carefully selected set of temperature and moisture profiles chosen to be representative of a wide range of atmospheric conditions. The LBL model calculations exhibited the greatest consistency with each other, typically agreeing to within 0.5 K in terms of the equivalent blackbody brightness temperature (Tb). The majority of NB and SB models agreed to within ±1 K of the LBL models, although a few older models exhibited systematic Tb biases in excess of 2 K. A discussion of the discrepancies between various models, their association with differences in model physics (e.g., continuum absorption), and their implications for UTH retrieval and radiance assimilation is presented.
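For readers unfamiliar with the quantity being compared: the equivalent blackbody brightness temperature Tb is obtained by inverting the Planck function for the observed radiance. The sketch below shows a minimal monochromatic inversion at 6.3 μm; the actual intercomparison convolves radiances over each instrument's spectral response, so this is illustrative only.

```python
# Minimal sketch, assuming monochromatic radiance: T_b is the temperature
# at which the Planck function reproduces the observed radiance.
import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m s^-1
K = 1.381e-23   # Boltzmann constant, J K^-1

def planck(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    a = 2 * H * C**2 / wavelength_m**5
    return a / (math.exp(H * C / (wavelength_m * K * temp_k)) - 1)

def brightness_temp(wavelength_m, radiance):
    """Invert the Planck function for the brightness temperature T_b."""
    a = 2 * H * C**2 / wavelength_m**5
    return (H * C / (wavelength_m * K)) / math.log(1 + a / radiance)

lam = 6.3e-6                       # 6.3 um water vapor band
rad = planck(lam, 240.0)           # radiance of a 240 K blackbody
print(brightness_temp(lam, rad))   # recovers 240.0 K
```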
Abstract
Observations from a wide variety of instruments and platforms are used to validate many different aspects of a three-dimensional mesoscale simulation of the dynamics, cloud microphysics, and radiative transfer of a cirrus cloud system observed on 26 November 1991 during the second cirrus field program of the First International Satellite Cloud Climatology Project (ISCCP) Regional Experiment (FIRE-II) located in southeastern Kansas. The simulation was made with a mesoscale dynamical model utilizing a simplified bulk water cloud scheme and a spectral model of radiative transfer. Expressions for cirrus optical properties for solar and infrared wavelength intervals as functions of ice water content and effective particle radius are modified for the midlatitude cirrus observed during FIRE-II and are shown to compare favorably with explicit size-resolving calculations of the optical properties. Rawinsonde, Raman lidar, and satellite data are evaluated and combined to produce a time–height cross section of humidity at the central FIRE-II site for model verification. Due to the wide spacing of rawinsondes and their infrequent release, important moisture features go undetected and are absent in the conventional analyses. The upper-tropospheric humidities used for the initial conditions were generally less than 50% of those inferred from satellite data, yet over the course of a 24-h simulation the model produced a distribution that closely resembles the large-scale features of the satellite analysis. The simulated distribution and concentration of ice compares favorably with data from radar, lidar, satellite, and aircraft. Direct comparison is made between the radiative transfer simulation and data from broadband and spectral sensors and inferred quantities such as cloud albedo, optical depth, and top-of-the-atmosphere 11-µm and 6.7-µm brightness temperatures. Comparison is also made with theoretical heating rates calculated using the rawinsonde data and measured ice water size distributions near the central site. For this case study, and perhaps for most other mesoscale applications, the differences between the observed and simulated radiative quantities are due more to errors in the prediction of ice water content than to errors in the optical properties or the radiative transfer solution technique.
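Parameterizations of the kind modified here commonly express cirrus shortwave extinction as a function of ice water path and effective radius, for example τ = IWP (a + b/re) in the style of Ebert and Curry (1992). The coefficients in the sketch below are illustrative placeholders, not the FIRE-II fits derived in the paper.

```python
# Sketch: shortwave extinction optical depth from ice water path (IWP)
# and effective radius r_e, in the common form tau = IWP * (a + b / r_e).
# Coefficients are illustrative defaults, not the paper's modified fits.
def cirrus_optical_depth(iwp_g_m2, r_e_um, a=3.448e-3, b=2.431):
    """Optical depth from IWP (g m^-2) and effective radius (um)."""
    return iwp_g_m2 * (a + b / r_e_um)

# A thin cirrus layer: 20 g m^-2 of ice with 30 um effective radius.
print(f"tau ~ {cirrus_optical_depth(20.0, 30.0):.2f}")
```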
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5–50 μm), the spectrum of solar radiation reflected by the Earth and its atmosphere (320–2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a “NIST [National Institute of Standards and Technology] in orbit.” CLARREO will greatly improve the accuracy and relevance of a wide range of spaceborne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
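One widely used metric of the kind such requirement studies build on is the time needed to detect a trend against natural variability (Weatherhead et al. 1998), in which instrument or calibration error acts as additional noise that delays detection. The sketch below is an illustration with assumed numbers and a simplified white-noise treatment of instrument error, not CLARREO's actual requirement analysis.

```python
# Sketch: years until a trend becomes detectable at ~95% confidence,
# following Weatherhead et al. (1998).  Treating calibration error as
# extra white noise is a simplification; all numbers are assumptions.
import math

def years_to_detect(trend, sigma_nat, phi, sigma_instr=0.0):
    """trend in units/yr; sigma_nat = natural variability (units);
    phi = lag-1 autocorrelation; sigma_instr = added instrument noise."""
    sigma = math.hypot(sigma_nat, sigma_instr)
    return ((3.3 * sigma / abs(trend))
            * math.sqrt((1 + phi) / (1 - phi))) ** (2 / 3)

# Hypothetical: 0.02 K/yr trend, 0.1 K natural variability, phi = 0.5.
print(f"perfect sensor : {years_to_detect(0.02, 0.1, 0.5):.1f} yr")
print(f"0.1 K cal error: {years_to_detect(0.02, 0.1, 0.5, 0.1):.1f} yr")
```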