Search Results
You are looking at 1 - 10 of 12 items for
- Author or Editor: Melissa Free
Abstract
Radiosonde data show a large seasonal difference in trends since 1979 in the tropical lower stratosphere, with a maximum cooling of ∼1 K decade−1 in December and January and a minimum in March or April at 50 mb between 10°N and 10°S. The statistically significant difference of up to ∼1 K decade−1 between trends in December and those in March amounts to up to 20% of the climatological seasonal cycle. Although the size of annual mean cooling trends differs substantially among datasets, the seasonal pattern of trends is similar in all six radiosonde datasets used here and is consistent with MSU satellite data for the lower stratosphere. This greater cooling in boreal winter essentially disappears below 100 mb, and the troposphere has a different and smaller seasonal trend pattern.
Trends in the tropical stratosphere show an inverse relationship with those in the Arctic for 1979–2009, which might be related to changes in stratospheric circulation. In most radiosonde data, however, the seasonal pattern of tropical trends at 50 mb since 1979 seems to come from a seasonal difference in the size of the stratospheric cooling in the mid-1990s, and trends for longer time periods or those for 1995–2009 do not show the same seasonal dependence. Whether the strengthening of the seasonal cycle in the stratosphere represents a long-term change related to greenhouse gas forcing, a shorter-lived shift related to ozone depletion, or unforced interdecadal variability requires further careful study.
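As a rough illustration of the kind of calculation behind these seasonal trend estimates (a minimal sketch, not the authors' procedure), the code below fits a separate least-squares trend to each calendar month of a monthly anomaly series and compares the December and March values; the synthetic data and array names are assumptions.

```python
# Minimal sketch: per-calendar-month linear trends from a monthly anomaly series.
# Synthetic values stand in for 50-mb tropical-mean radiosonde anomalies, 1979-2009.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2010)                        # 31 years
anom = rng.normal(0.0, 0.5, size=(years.size, 12))   # [year, month] anomalies, K

trends = np.empty(12)
for m in range(12):
    # Ordinary least-squares slope for this calendar month, converted to K per decade
    trends[m] = np.polyfit(years, anom[:, m], 1)[0] * 10.0

print("Dec minus Mar trend difference: %.2f K per decade" % (trends[11] - trends[2]))
```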
Abstract
Both observed and modeled upper-air temperature profiles show the tropospheric cooling and tropical stratospheric warming effects from the three major volcanic eruptions since 1960. Detailed comparisons of vertical profiles of Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) and Hadley Centre Atmospheric Temperatures, version 2 (HadAT2), radiosonde temperatures with output from six coupled GCMs show good overall agreement on the responses to the 1991 Mount Pinatubo and 1982 El Chichón eruptions in the troposphere and stratosphere, with a tendency of the models to underestimate the upper-tropospheric cooling and overestimate the stratospheric warming relative to observations. The cooling effect at the surface in the tropics is amplified with altitude in the troposphere in both observations and models, but this amplification is greater for the observations than for the models. Models and observations show a large disagreement around 100 mb for Mount Pinatubo in the tropics, where observations show essentially no change, while models show significant warming of ∼0.7 to ∼2.6 K. This difference occurs even in models that accurately simulate stratospheric warming at 50 mb. Overall, the Parallel Climate Model is an outlier in that it simulates more volcanic-induced stratospheric warming than both the other models and the observations in most cases.
From 1979 to 1999 in the tropics, RATPAC shows a trend of less than 0.1 K decade−1 at and above 300 mb before volcanic effects are removed, while the mean of the models used here has a trend of more than 0.3 K decade−1, giving a difference of ∼0.2 K decade−1. At 300 mb, from 0.02 to 0.10 K decade−1 of this difference may be due to the influence of volcanic eruptions, with the smaller estimate appearing more likely than the larger. No more than ∼0.03 K of the ∼0.1-K difference in trends between the surface and troposphere at 700 mb or below in the radiosonde data appears to be due to volcanic effects.
Abstract
This paper presents evidence of significant discontinuities in U.S. cloud cover data from the Integrated Surface Database (ISD) and its predecessor datasets. While long-term U.S. cloud records have some well-known homogeneity problems related to the introduction of the Automated Surface Observing System (ASOS) in the 1990s, the change to the international standard reporting format [aviation routine weather report (METAR)] in the United States in July 1996 introduces an additional inhomogeneity at many of the stations where humans still make or supplement cloud observations. This change is associated with an upward shift in total cloud of 0.1%–10%, statistically significant at 95 of 172 stations. The shift occurs at both National Weather Service and military weather stations, producing a mean increase in total cloud of 2%–3%. This suggests that the positive trends in U.S. cloud cover reported by other researchers for recent time periods may be exaggerated, a conclusion that is supported by comparisons with precipitation and diurnal temperature range data.
Additional discontinuities exist at other times in the frequency distributions of fractional cloud cover at the majority of stations, many of which may be explained by changes in the sources and types of data included in ISD. Some of these result in noticeable changes in monthly-mean total cloud. The current U.S. cloud cover database needs thorough homogeneity testing and adjustment before it can be used with confidence for trend assessment or satellite product validation.
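To make the nature of such a shift test concrete, here is a minimal sketch (not the paper's method) that checks for a mean change in a monthly total-cloud series at a known date, the July 1996 METAR changeover, using a two-sample t-test; the series, the imposed 2.5% shift, and the station setup are all synthetic assumptions.

```python
# Minimal sketch: test for a mean shift in monthly total cloud at a known date
# (the July 1996 METAR changeover). In practice the seasonal cycle would be
# removed first; this illustration uses white noise plus an imposed step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
months = np.arange("1990-01", "2003-01", dtype="datetime64[M]")
cloud = rng.normal(60.0, 8.0, size=months.size)           # percent total cloud (synthetic)
cloud[months >= np.datetime64("1996-07")] += 2.5          # impose a 2.5% upward shift

before = cloud[months < np.datetime64("1996-07")]
after = cloud[months >= np.datetime64("1996-07")]
t, p = stats.ttest_ind(before, after, equal_var=False)
print("shift = %.1f%%, p = %.3g" % (after.mean() - before.mean(), p))
```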
Abstract
Cloud cover data from ground-based weather observers can be an important source of climate information, but the record of such observations in the United States is disrupted by the introduction of automated observing systems and other artificial shifts that interfere with our ability to assess changes in cloudiness at climate time scales. A new dataset using 54 National Weather Service (NWS) and 101 military stations that continued to make human-augmented cloud observations after the 1990s has been adjusted using statistical changepoint detection and visual scrutiny. The adjustments substantially reduce the trends in U.S. mean total cloud cover while increasing the agreement between the cloud cover time series and those of physically related climate variables. For 1949–2009, the adjusted time series give a trend in U.S. mean total cloud of 0.11% ± 0.22% decade−1 for the military data, 0.55% ± 0.24% decade−1 for the NWS data, and 0.31% ± 0.22% decade−1 for the combined dataset. These trends are less than one-half of those in the original data. For 1976–2004, the original data give a significant increase but the adjusted data show an insignificant trend from −0.17% decade−1 (military stations) to 0.66% decade−1 (NWS stations). Trends have notable regional variability, with the northwest United States showing declining total cloud cover for all time periods examined, while trends for most other regions are positive. Differences between trends in the adjusted datasets from military stations and NWS stations may be rooted in the difference in data source and reflect the uncertainties in the homogeneity adjustment process.
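The adjustment step itself can be sketched simply: once changepoint dates have been identified, earlier segments of a station series are shifted so the whole record is expressed relative to the most recent observing configuration. The sketch below uses crude segment-mean matching with illustrative break positions, which is an assumption rather than the offset estimation actually used for this dataset.

```python
# Minimal sketch: adjust a cloud series for known step changes by referencing
# all earlier segments to the most recent one. Matching segment means is a
# crude stand-in for the paper's offsets and would also remove real
# low-frequency differences between segments.
import numpy as np

def adjust_for_steps(series, break_indices):
    """Shift each earlier segment so its mean matches the most recent segment."""
    adj = np.asarray(series, dtype=float).copy()
    edges = [0] + sorted(break_indices) + [len(adj)]
    ref_mean = adj[edges[-2]:].mean()                 # most recent segment
    for start, stop in zip(edges[:-2], edges[1:-1]):  # all earlier segments
        adj[start:stop] += ref_mean - adj[start:stop].mean()
    return adj

# Illustrative series with steps at months 60 and 120 (percent total cloud)
x = np.r_[np.full(60, 55.0), np.full(60, 58.0), np.full(60, 57.0)]
print(adjust_for_steps(x, [60, 120])[[0, 60, 120]])   # all segments now near 57.0
```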
Abstract
Using a reanalysis of the climate of the past half century as a model of temperature variations over the next half century, tests of various data collection protocols are made to develop recommendations for observing system requirements for monitoring upper-air temperature. The analysis focuses on accurately estimating monthly climatic data (specifically, monthly average temperature and its standard deviation) and multidecadal trends in monthly temperatures at specified locations, from the surface to 30 hPa. It does not address upper-air network size or station location issues.
The effects of reducing the precision of temperature data, incomplete sampling of the diurnal cycle, incomplete sampling of the days of the month, imperfect long-term stability of the observations, and changes in observation schedule are assessed. To ensure accurate monthly climate statistics, observations with at least 0.5-K precision, made at least twice daily on at least every second or third day, are sufficient. Using these same criteria, and maintaining long-term measurement stability to within 0.25 (0.1) K, for periods of 20 to 50 yr, errors in trend estimates can be avoided in at least 90% (95%) of cases. In practical terms, this requires no more than one intervention (e.g., instrument change) over the period of record, and its effect must be to change the measurement bias by no more than 0.25 (0.1) K. The effect of the first intervention dominates the effects of subsequent, uncorrelated interventions. Changes in observation schedule also affect trend estimates. Reducing the number of observations per day, or changing the timing of a single observation per day, has a greater potential to produce errors in trends than reducing the number of days per month on which observations are made.
These findings depend on the validity of using reanalysis data to approximate the statistical nature of future climate variations, and on the statistical tests employed. However, the results are based on conservative assumptions, so that adopting observing system requirements based on this analysis should result in a data archive that will meet climate monitoring needs over the next 50 yr.
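One of the degradation experiments described above can be sketched in a few lines: round a "complete" twice-daily series to 0.5-K reporting precision, keep observations only every other day, and compare the resulting monthly mean with the full-resolution value. The synthetic temperatures below are an assumption standing in for the reanalysis profiles used in the study.

```python
# Minimal sketch of one degradation experiment: 0.5-K rounding plus sampling
# every other day, then the error in the monthly mean relative to full sampling.
import numpy as np

rng = np.random.default_rng(2)
days = 30
temps = 250.0 + rng.normal(0.0, 3.0, size=(days, 2))    # K, 0000 and 1200 UTC obs

full_mean = temps.mean()

degraded = np.round(temps / 0.5) * 0.5                  # 0.5-K reporting precision
degraded = degraded[::2]                                 # keep every other day only
print("monthly-mean error: %.3f K" % (degraded.mean() - full_mean))
```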
Abstract
In comparisons of radiosonde vertical temperature trend profiles with comparable profiles derived from selected Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4) general circulation models (GCMs) driven by major external forcings of the latter part of the twentieth century, model trends exhibit a positive bias relative to radiosonde trends in the majority of cases for both time periods examined (1960–99 and 1979–99). Homogeneity adjustments made in the Radiosonde Atmospheric Temperature Products for Assessing Climate (RATPAC) and Hadley Centre Atmospheric Temperatures, version 2 (HadAT2), radiosonde datasets, which are applied by dataset developers to account for time-varying biases introduced by historical changes in instruments and measurement practices, reduce the relative bias in most cases. Although some differences were found between the two observed datasets, in general the observed trend profiles were more similar to one another than either was to the GCM profiles.
In the troposphere, adjustment has a greater impact on improving agreement of the shapes of the trend profiles than on improving agreement of the layer mean trends, whereas in the stratosphere the opposite is true. Agreement between the shapes of GCM and radiosonde trend profiles is generally better in the stratosphere than the troposphere, with more complexity to the profiles in the latter than the former. In the troposphere the tropics exhibit the poorest agreement between GCM and radiosonde trend profiles, but also the largest improvement in agreement resulting from homogeneity adjustment.
In the stratosphere, radiosonde trends indicate more cooling than GCMs. For the 1979–99 period, a disproportionate amount of this discrepancy arises several months after the eruption of Mount Pinatubo, at which time temperatures in the radiosonde time series cool abruptly by ∼0.5 K compared to those derived from GCMs, and this difference persists to the end of the record.
Abstract
Long-term changes in the intensity of tropical cyclones are of considerable interest because of concern that greenhouse warming may increase storm damage. The potential intensity (PI) of tropical cyclones can be calculated from thermodynamic principles, given the state of the sea surface and atmosphere, and has been shown in earlier studies to give a reasonable estimate of maximum intensity for observed storms. The PI calculated using radiosonde data at 14 tropical island locations shows only small, statistically insignificant trends from 1980 to 1995 and from 1975 to 1995. In the mid-1990s PI at most of these stations does not show the strong increase that appears in global and regional PI calculated from reanalysis data. Comparison with results derived from reanalysis data suggests that previous adjustments to the reanalysis-derived PI may overstate PI after 1980 in some regions in comparison with that before 1980. Both reanalysis and radiosonde PI show similar interannual variability in most regions, much of which appears to be related to ENSO and other changes in SST. Between 1975 and 1980, however, while SSTs rose, PI decreased, illustrating the hazards of predicting changes in hurricane intensity from projected SST changes alone.
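The abstract does not reproduce the expression used; one widely quoted thermodynamic form of the potential maximum wind speed (following Emanuel and Bister), which may differ in detail from the formulation applied in this study, is

$$ V_{\mathrm{PI}}^{2} = \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\left(k_s^{*} - k_b\right), $$

where $T_s$ is the sea surface temperature, $T_o$ the outflow temperature, $C_k/C_D$ the ratio of the surface exchange coefficients for enthalpy and momentum, $k_s^{*}$ the saturation enthalpy of air at the sea surface, and $k_b$ the enthalpy of boundary layer air. These symbols are standard notation rather than values from the paper; the radiosonde profiles and SST data supply $T_o$, $T_s$, and the enthalpies.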
Abstract
A homogeneity-adjusted dataset of total cloud cover from weather stations in the contiguous United States is compared with cloud cover in four state-of-the-art global reanalysis products: the Climate Forecast System Reanalysis from NCEP, the Modern-Era Retrospective Analysis for Research and Applications from NASA, ERA-Interim from ECMWF, and the Japanese 55-year Reanalysis Project from the Japan Meteorological Agency. The reanalysis products examined in this study generally show much lower cloud amount than visual weather station data, and this underestimation appears to be generally consistent with their overestimation of downward surface shortwave fluxes when compared with surface radiation data from the Surface Radiation Network. Nevertheless, the reanalysis products largely succeed in simulating the main aspects of interannual variability of cloudiness for large-scale means, as measured by correlations of 0.81–0.90 for U.S. mean time series. Trends in the reanalysis datasets for the U.S. mean for 1979–2009, ranging from −0.38% to −1.8% decade−1, are in the same direction as the trend in surface data (−0.50% decade−1), but further effort is needed to understand the discrepancies in their magnitudes.
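The two summary measures quoted above, interannual correlation and linear trend in percent per decade, can be computed as in the sketch below; the synthetic series standing in for the station and reanalysis U.S. means are assumptions, not the datasets compared in the paper.

```python
# Minimal sketch: correlation of two U.S.-mean annual cloud series and their
# linear trends in percent per decade. Synthetic series only.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1979, 2010)
station = 60.0 + rng.normal(0.0, 1.5, size=years.size)
reanalysis = station - 8.0 + rng.normal(0.0, 1.0, size=years.size)   # lower mean cloud

r = np.corrcoef(station, reanalysis)[0, 1]
trend = lambda y: np.polyfit(years, y, 1)[0] * 10.0                  # % per decade
print("r = %.2f, station trend = %.2f, reanalysis trend = %.2f"
      % (r, trend(station), trend(reanalysis)))
```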
Abstract
The utility of a “first difference” method for producing temporally homogeneous large-scale mean time series is assessed. Starting with monthly averages, the method involves dropping data around the time of suspected discontinuities and then calculating differences in temperature from one year to the next, resulting in a time series of year-to-year differences for each month at each station. These first difference time series are then combined to form large-scale means, and mean temperature time series are constructed from the first difference series. When applied to radiosonde temperature data, the method introduces random errors that decrease with the number of station time series used to create the large-scale time series and increase with the number of temporal gaps in the station time series. Root-mean-square errors for annual means of datasets produced with this method using over 500 stations are estimated at no more than 0.03 K, with errors in trends less than 0.02 K decade−1 for 1960–97 at 500 mb. For a 50-station dataset, errors in trends in annual global means introduced by the first differencing procedure may be as large as 0.06 K decade−1 (for six breaks per series), which is greater than the standard error of the trend. Although the first difference method offers significant resource and labor advantages over methods that attempt to adjust the data, it introduces an error in large-scale mean time series that may be unacceptable in some cases.
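The first-difference procedure described above can be sketched directly: mask data around suspected breaks, take year-to-year differences for each calendar month at each station, average the differences across stations, and accumulate them to rebuild a large-scale mean anomaly series. The break locations and data below are synthetic assumptions used only to show the mechanics.

```python
# Minimal sketch of the first-difference method: mask data near suspected breaks,
# difference year to year by calendar month and station, average over stations,
# and cumulatively sum to recover a large-scale mean anomaly series.
import numpy as np

rng = np.random.default_rng(4)
n_years, n_stations = 40, 50
temps = rng.normal(0.0, 1.0, size=(n_stations, n_years, 12))   # [station, year, month]

# Drop a year of data around each suspected discontinuity (illustrative breaks).
breaks = {0: [15], 1: [10, 25]}                                  # station -> break years
for stn, yrs in breaks.items():
    for y in yrs:
        temps[stn, y, :] = np.nan

diffs = temps[:, 1:, :] - temps[:, :-1, :]        # year-to-year first differences
mean_diffs = np.nanmean(diffs, axis=0)            # average over stations: [n_years-1, 12]

# Rebuild the large-scale mean series: anchor the first year at zero and
# accumulate the mean differences forward in time.
rebuilt = np.vstack([np.zeros((1, 12)), np.nancumsum(mean_diffs, axis=0)])
print(rebuilt.shape)   # (40, 12): monthly large-scale mean anomalies
```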