Search Results
Showing 1–10 of 19 items for Author or Editor: Bomin Sun
Abstract
This paper presents evidence of significant discontinuities in U.S. cloud cover data from the Integrated Surface Database (ISD) and its predecessor datasets. While long-term U.S. cloud records have some well-known homogeneity problems related to the introduction of the Automated Surface Observing System (ASOS) in the 1990s, the change to the international standard reporting format [aviation routine weather report (METAR)] in the United States in July 1996 introduces an additional inhomogeneity at many of the stations where humans still make or supplement cloud observations. This change is associated with an upward shift in total cloud of 0.1%–10%, statistically significant at 95 of 172 stations. The shift occurs at both National Weather Service and military weather stations, producing a mean increase in total cloud of 2%–3%. This suggests that the positive trends in U.S. cloud cover reported by other researchers for recent time periods may be exaggerated, a conclusion that is supported by comparisons with precipitation and diurnal temperature range data.
Additional discontinuities exist at other times in the frequency distributions of fractional cloud cover at the majority of stations, many of which may be explained by changes in the sources and types of data included in ISD. Some of these result in noticeable changes in monthly-mean total cloud. The current U.S. cloud cover database needs thorough homogeneity testing and adjustment before it can be used with confidence for trend assessment or satellite product validation.
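As a hedged illustration of the kind of shift detection this abstract describes, the following minimal sketch tests for a mean shift at a known changepoint date (such as the July 1996 METAR changeover) with a two-sample Welch t test. The station series, the injected shift size, and the choice of test are assumptions for illustration, not the paper's actual procedure.

```python
# Minimal sketch: test for a mean shift in monthly total cloud cover at a
# known changepoint (here the July 1996 METAR changeover). Illustrative
# only; synthetic data with a 2.5% shift injected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

months = np.arange("1986-01", "2007-01", dtype="datetime64[M]")
cloud = 55 + 8 * rng.standard_normal(months.size)      # monthly mean, %
cloud[months >= np.datetime64("1996-07")] += 2.5       # injected shift

before = cloud[months < np.datetime64("1996-07")]
after = cloud[months >= np.datetime64("1996-07")]

# Welch's t test (unequal variances) for a difference in means.
t, p = stats.ttest_ind(after, before, equal_var=False)
print(f"shift = {after.mean() - before.mean():+.2f}%  t = {t:.2f}  p = {p:.4f}")
```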
Abstract
Cloud cover data from ground-based weather observers can be an important source of climate information, but the record of such observations in the United States is disrupted by the introduction of automated observing systems and other artificial shifts that interfere with our ability to assess changes in cloudiness at climate time scales. A new dataset using 54 National Weather Service (NWS) and 101 military stations that continued to make human-augmented cloud observations after the 1990s has been adjusted using statistical changepoint detection and visual scrutiny. The adjustments substantially reduce the trends in U.S. mean total cloud cover while increasing the agreement between the cloud cover time series and those of physically related climate variables. For 1949–2009, the adjusted time series give a trend in U.S. mean total cloud of 0.11% ± 0.22% decade⁻¹ for the military data, 0.55% ± 0.24% decade⁻¹ for the NWS data, and 0.31% ± 0.22% decade⁻¹ for the combined dataset. These trends are less than one-half of those in the original data. For 1976–2004, the original data give a significant increase, but the adjusted data show insignificant trends ranging from −0.17% decade⁻¹ (military stations) to 0.66% decade⁻¹ (NWS stations). Trends show notable regional variability: the northwest United States shows declining total cloud cover for all time periods examined, while trends for most other regions are positive. Differences between trends in the adjusted datasets from military stations and NWS stations may be rooted in the difference in data sources and reflect the uncertainties in the homogeneity adjustment process.
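For the trend figures quoted above (percent sky cover per decade with a ± range), here is a minimal sketch of how a linear trend and an approximate 95% confidence interval might be computed from an annual-mean series. The synthetic data and the simple ordinary-least-squares approach (which ignores serial correlation) are assumptions, not the paper's method.

```python
# Minimal sketch: linear trend of an annual-mean cloud series in
# % sky cover per decade, with an approximate 95% confidence interval.
# Synthetic data; serial correlation in real series would widen the CI.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = np.arange(1949, 2010)
cloud = 60 + 0.03 * (years - years[0]) + 1.5 * rng.standard_normal(years.size)

res = stats.linregress(years, cloud)        # slope in % per year
ci = 1.96 * res.stderr                      # normal approximation
print(f"trend = {10 * res.slope:+.2f} +/- {10 * ci:.2f} % per decade")
```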
Abstract
Several changes in U.S. observational practice [in particular, the introduction of the Automated Surface Observing System (ASOS) in the early 1990s] have introduced substantial heterogeneity into the time series of most ground-based cloud observations. In this article, an attempt is made to reconstruct the time series of average low cloud cover (LCC) over the country up to the year 2001, using the sky-condition and cloud-base-height information collected in the national archive, and to describe its spatial and temporal variability. The switch from human observations to ASOS can be bridged through the use of the frequency of overcast/broken cloudiness. During the past 52 yr, the nationwide LCC appears to exhibit a significant increase, but all of this increase occurred prior to the early 1980s; thereafter LCC tends to decrease. This finding is consistent with similar changes in the frequency of days with precipitation. For the period when cloud-type information was still available (i.e., the pre-ASOS period), the overall LCC increase was due to increases in stratiform and cumulonimbus cloud occurrence, while cumulus cloud frequency decreased.
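The bridging quantity mentioned above, the frequency of overcast/broken cloudiness, can be illustrated with a minimal sketch that counts broken-or-overcast reports among categorical sky-condition observations. The observation list is hypothetical; the category conventions follow standard METAR usage (BKN ≈ 5/8–7/8 cover, OVC = 8/8).

```python
# Minimal sketch: frequency of broken/overcast reports among categorical
# sky-condition observations. Hypothetical data; BKN/OVC correspond to
# >= 5/8 sky cover in standard METAR usage.
from collections import Counter

obs = ["CLR", "SCT", "BKN", "OVC", "BKN", "FEW", "OVC", "SCT", "OVC", "CLR"]

BROKEN_OR_OVERCAST = {"BKN", "OVC"}

counts = Counter(obs)
freq = sum(counts[c] for c in BROKEN_OR_OVERCAST) / len(obs)
print(f"broken/overcast frequency: {freq:.0%}")   # -> 50%
```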
Abstract
Daily latent and sensible heat fluxes for the Atlantic Ocean from 1988 to 1999 with 1° × 1° resolution have been recently developed at Woods Hole Oceanographic Institution (WHOI) by using a variational object analysis approach. The present study evaluated the degree of improvement made by the WHOI analysis using in situ buoy/ship measurements as verification data. The measurements were taken from the following field experiments: the five-buoy Subduction Experiment in the eastern subtropical North Atlantic, three coastal field programs in the western Atlantic, two winter cruises by R/V Knorr from the Labrador Sea Deep Convection Experiment, and the Pilot Research Moored Array in the Tropical Atlantic (PIRATA). The differences between the observed and the WHOI-analyzed fluxes and surface meteorological variables were quantified. Comparisons with the outputs from two numerical weather prediction (NWP) models were also conducted.
The mean and daily variability of the latent and sensible heat fluxes from the WHOI analysis are an improvement over the NWP fluxes at all of the measurement sites. The improved flux representation is due to the use not only of a better flux algorithm but also of improved estimates of the flux-related variables. The mean differences from the observations in latent and sensible heat flux, respectively, range from 2.9 W m⁻² (3% of the corresponding mean measured value) and 1.0 W m⁻² (13%) at the Subduction Experiment site, to 11.9 W m⁻² (13%) and 0.7 W m⁻² (11%) across the PIRATA array, to 15.9 W m⁻² (20%) and 10.5 W m⁻² (34%) at the coastal buoy sites, to 8.7 W m⁻² (7%) and 9.7 W m⁻² (6%) along the Knorr cruise tracks. The study also suggests that further improvement in the accuracy of latent and sensible heat fluxes will depend on the availability of high-quality SST observations and on improved representation/observations of air humidity in the tropical Atlantic.
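The difference statistics quoted above (a mean difference in W m⁻² and the same difference as a percentage of the observed mean) have a simple form; a minimal sketch with synthetic stand-in flux series follows.

```python
# Minimal sketch: mean difference (analysis minus observation) and the
# same difference as a percentage of the observed mean. Synthetic daily
# latent heat flux series stand in for buoy and analysis data.
import numpy as np

rng = np.random.default_rng(2)
obs = 100 + 15 * rng.standard_normal(365)       # buoy flux, W m^-2
ana = obs + 3 + 5 * rng.standard_normal(365)    # analysis, ~+3 W m^-2 bias

bias = np.mean(ana - obs)
rel = 100 * bias / np.mean(obs)
print(f"mean difference = {bias:.1f} W m^-2 ({rel:.0f}% of observed mean)")
```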
Abstract
A new daily latent and sensible heat flux product, developed at the Woods Hole Oceanographic Institution (WHOI) with 1° × 1° resolution for the Atlantic Ocean (65°S–65°N) for the period 1988–99, is presented. The flux product was developed by using a variational objective analysis approach to obtain best estimates of the flux-related basic surface meteorological variables (e.g., wind speed, air humidity, air temperature, and sea surface temperature) through synthesizing satellite data and the outputs of numerical weather prediction (NWP) models at the National Centers for Environmental Prediction (NCEP) and the European Centre for Medium-Range Weather Forecasts (ECMWF). The state-of-the-art bulk flux algorithm version 2.6a, developed from the field experiments of the Coupled Ocean–Atmosphere Response Experiment (COARE), was applied to compute the flux fields.
The study focused on analyzing the mean-field properties of the WHOI daily latent and sensible heat fluxes and on comparing them with the ship-based climatology from the Southampton Oceanography Centre (SOC) and with NWP outputs. The WHOI yearly mean fluxes are consistent with the SOC climatology in both structure and amplitude, but the WHOI yearly mean basic variables are not always consistent with SOC; the better agreement in the fluxes is due to error compensation when the variables are combined. Both the ECMWF and the NCEP–Department of Energy (DOE) Atmospheric Model Intercomparison Project (AMIP) Reanalysis-2 (NCEP2) model data show larger turbulent heat loss (∼20 W m⁻²) than the WHOI product. Nevertheless, the WHOI fluxes agree well with the NCEP2 fluxes in structure and in the pattern of year-to-year variations, but not with the ECMWF operational outputs; the latter contain a few abrupt changes coinciding with modifications to the model forecast–analysis system. The impact of these model changes on the basic variables is less dramatic, a factor that justifies including the basic variables, rather than the fluxes, from the ECMWF operational model in the synthesis. The flux algorithms of the two NWP models give larger latent and sensible heat loss. Recalculating the NWP fluxes using the COARE algorithm considerably reduces this heat loss but does not replicate the WHOI results. The present analysis could not quantify the degree of improvement in the mean aspect of the WHOI daily flux fields because accurate basinwide verification data are lacking.
This study is the first to demonstrate that the synthesis approach is a useful tool for combining the NWP and satellite data sources and improving the mean representativeness of daily basic variable fields and, hence, the daily flux fields. It is anticipated that such an approach may become increasingly relied upon in the preparation of future high-quality flux products.
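For context on the bulk algorithm named above: COARE-type algorithms derive the turbulent fluxes from the basic variables via bulk formulas. The sketch below uses constant exchange coefficients for illustration only; the real COARE 2.6a algorithm instead derives stability-dependent coefficients iteratively from Monin–Obukhov similarity, which this sketch does not attempt.

```python
# Minimal sketch of the bulk formulas behind algorithms like COARE:
#   LH = rho * Lv * Ce * U * (qs - qa),  SH = rho * cp * Ch * U * (Ts - Ta).
# Constant transfer coefficients are assumed; COARE 2.6a instead computes
# them from Monin-Obukhov similarity with an iterative stability loop.

RHO = 1.2          # air density, kg m^-3
LV = 2.5e6         # latent heat of vaporization, J kg^-1
CP = 1004.0        # specific heat of air, J kg^-1 K^-1
CE = CH = 1.2e-3   # illustrative constant exchange coefficients

def bulk_fluxes(u, ts, ta, qs, qa):
    """Latent and sensible heat flux (W m^-2, positive upward).

    u: wind speed (m s^-1); ts/ta: sea/air temperature (consistent units);
    qs/qa: sea-surface saturation / air specific humidity (kg kg^-1).
    """
    lh = RHO * LV * CE * u * (qs - qa)
    sh = RHO * CP * CH * u * (ts - ta)
    return lh, sh

# Example: trade-wind-like conditions.
lh, sh = bulk_fluxes(u=7.0, ts=28.0, ta=26.5, qs=0.024, qa=0.017)
print(f"LH = {lh:.0f} W m^-2, SH = {sh:.0f} W m^-2")
```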
Abstract
Surface meteorological variables and turbulent heat fluxes in the National Centers for Environmental Prediction–National Center for Atmospheric Research reanalyses 1 and 2 (NCEP1 and NCEP2) and the analysis from the operational system of the European Centre for Medium-Range Weather Forecasts (ECMWF) are compared with high-quality moored buoy observations in regions of the Atlantic including the eastern North Atlantic, the coastal regions of the western North Atlantic, and the Tropics. The buoy latent and sensible heat fluxes are determined from buoy measurements using the recently improved Tropical Ocean Global Atmosphere Coupled Ocean–Atmosphere Response Experiment (TOGA COARE) flux algorithm.
The time-mean oceanic heat loss from the model analyses is systematically overestimated in all the regions. The overestimation in latent heat loss ranges from about 14 W m⁻² (13%) in the eastern subtropical North Atlantic, to about 29 W m⁻² (30%) in the Tropics, to about 30 W m⁻² (49%) in the midlatitude coastal areas, where the overestimation in sensible heat flux reaches about 20 W m⁻² (60%). Depending upon the region and the NWP model, these systematic overestimations are either reduced, change to underestimations, or remain unchanged when the TOGA COARE flux algorithm is used to recalculate the fluxes. The bias in surface meteorological variables, one of the major factors behind the biases in the revised NWP heat fluxes, varies with region and NWP analysis. Generally, the temperature and humidity biases in the coastal regions are much larger than in other regions. In the extratropical regions, NCEP1 and NCEP2 generally show a wet bias, which is mainly responsible for the underestimation of the revised NWP latent heat loss. In the Tropics, a dry bias is found in the NWP analyses, particularly in ECMWF and NCEP2, which contributes to the overestimation of the revised NWP latent heat loss. Compared to NCEP1, NCEP2 shows less cold bias in 2-m air temperature and thus a less biased sensible heat flux; NCEP2 also shows less humid bias in 2-m humidity in the extratropical regions but more dry bias in 2-m humidity in the Tropics, either of which leads to a more biased latent heat flux in NCEP2.
Despite the significant biases in the NWP surface fields and the poor representation of short-time sea surface temperature variability, the NWP models are able to represent the dominant short-time variability in other basic variables and thus the variability in heat fluxes in the wintertime coastal regions of the western North Atlantic (on timescales of 3–4 days and 1 week) and the northern and southern subtropical regions (on a timescale of about 2 weeks), but ECMWF and particularly the NCEP analyses do not represent well the 2–3-week variability in the tropical Atlantic.
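One simple way to examine timescale-dependent agreement of the sort described above is to band-pass both series and correlate the results. The sketch below uses synthetic data and a crude running-mean band-pass; it is an illustration, not the paper's variability analysis.

```python
# Minimal sketch: compare two daily flux series on a chosen timescale band
# using a crude running-mean band-pass, then correlate. Synthetic data
# sharing a ~2-week signal plus independent noise.
import numpy as np

def running_mean(x, n):
    return np.convolve(x, np.ones(n) / n, mode="same")

def bandpass(x, short, long_):
    # Keep variability between `short` and `long_` days.
    return running_mean(x, short) - running_mean(x, long_)

rng = np.random.default_rng(3)
t = np.arange(720)                               # two years of daily values
signal = 20 * np.sin(2 * np.pi * t / 14)         # common ~2-week signal
buoy = signal + 10 * rng.standard_normal(t.size)
nwp = signal + 12 * rng.standard_normal(t.size)

r = np.corrcoef(bandpass(buoy, 7, 30), bandpass(nwp, 7, 30))[0, 1]
print(f"band-passed (7-30 day) correlation: r = {r:.2f}")
```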
Abstract
A homogeneity-adjusted dataset of total cloud cover from weather stations in the contiguous United States is compared with cloud cover in four state-of-the-art global reanalysis products: the Climate Forecast System Reanalysis from NCEP, the Modern-Era Retrospective Analysis for Research and Applications from NASA, ERA-Interim from ECMWF, and the Japanese 55-year Reanalysis Project from the Japan Meteorological Agency. The reanalysis products examined in this study generally show much lower cloud amount than visual weather station data, and this underestimation appears to be generally consistent with their overestimation of downward surface shortwave fluxes when compared with surface radiation data from the Surface Radiation Network. Nevertheless, the reanalysis products largely succeed in simulating the main aspects of interannual variability of cloudiness for large-scale means, as measured by correlations of 0.81–0.90 for U.S. mean time series. Trends in the reanalysis datasets for the U.S. mean for 1979–2009, ranging from −0.38% to −1.8% decade⁻¹, are in the same direction as the trend in surface data (−0.50% decade⁻¹), but further effort is needed to understand the discrepancies in their magnitudes.
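The interannual correlations quoted above (0.81–0.90 for U.S. mean time series) measure covariation despite a mean offset; a minimal sketch with synthetic series illustrates how a reanalysis can underestimate the mean cloud amount yet still correlate highly with the station series.

```python
# Minimal sketch: a reanalysis series can sit well below the station series
# in the mean yet still track its interannual variability closely.
# Synthetic annual U.S.-mean total cloud series (%, 1979-2009).
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1979, 2010)
common = rng.standard_normal(years.size)         # shared interannual signal
station = 58 + 1.5 * common + 0.5 * rng.standard_normal(years.size)
reanalysis = 48 + 1.5 * common + 0.7 * rng.standard_normal(years.size)

r = np.corrcoef(station, reanalysis)[0, 1]
print(f"mean offset = {reanalysis.mean() - station.mean():+.1f}%  r = {r:.2f}")
```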
Abstract
One possible way to check the adequacy of the physical description of meteorological elements in global climate models (GCMs) is to compare the statistical structure of these elements as reproduced by the models with empirical data from the world climate observational system. Progress in GCM development warrants a further step in this assessment. The description of a meteorological element in a model can be considered adequate if, in addition to properly reproducing the mean and variability of this element (as shown by the observational system), the model properly reproduces the internal relationships between this element and other climatic variables (as observed during the past several decades). Therefore, to distinguish the more reliable models, the authors suggest first analyzing these relationships, “the behavior of the climatic system,” using observational data and then testing the GCMs’ output against this behavior.
In this paper, the authors calculated a set of statistics from synoptic data of the past several decades and compared them with the outputs of seven GCMs participating in the Atmospheric Model Intercomparison Project (AMIP), focusing on cloud cover, one of the major trouble spots for which parameterizations are still not well established, and on its interaction with other meteorological fields. Differences between the long-term mean values of surface air temperature and atmospheric humidity under average and clear-sky conditions, or under average and overcast conditions, characterize the long-term noncausal associations between these two elements and total cloud cover. Not all the GCMs reproduce these associations properly. For example, all models tested generally agree in reproducing the mean daily cloud–temperature associations in the cold season, but large discrepancies between the empirical data and some models are found for summer conditions. A correct reproduction of the diurnal cycle of cloud–temperature associations in the warm season remains a major challenge for two of the GCMs tested.
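The conditional statistic described above (the difference between mean temperature or humidity under all-sky and clear-sky, or all-sky and overcast, conditions) can be computed by stratifying synoptic records on cloud cover. A minimal sketch with hypothetical records follows; the injected cooling under overcast skies is an assumption for illustration.

```python
# Minimal sketch of the conditional statistic: difference between mean
# surface air temperature under all-sky and clear-sky (or overcast)
# conditions. Hypothetical synoptic records.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
n = 1000
df = pd.DataFrame({
    "cloud_octas": rng.integers(0, 9, n),    # 0 = clear, 8 = overcast
    "t2m": 15 + rng.standard_normal(n),
})
# Inject a cloud-temperature association: overcast observations run cooler.
df.loc[df.cloud_octas == 8, "t2m"] -= 1.5

all_sky = df.t2m.mean()
clear = df.loc[df.cloud_octas == 0, "t2m"].mean()
overcast = df.loc[df.cloud_octas == 8, "t2m"].mean()
print(f"all-sky minus clear:    {all_sky - clear:+.2f} K")
print(f"all-sky minus overcast: {all_sky - overcast:+.2f} K")
```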
Automated Surface Observing Systems (ASOS) were widely introduced to replace manned weather stations around the mid-1990s over North America and other parts of the world. While the laser-beam ceilometers of the ASOS in North America measure overhead clouds within the lower 3.6 km of the atmosphere, their records contain no cloud-type or opacity information and are not comparable with previous cloud records. However, a network of 124 U.S. military weather stations with continuous human observations provides useful information on total cloud cover over the contiguous United States, lessening the disruption caused by the ASOS. Analyses of the military cloud data suggest an increasing trend (~1.4% of the sky cover per decade) in U.S. total cloud cover from 1976 to 2004, with increases over most of the country except the Northwest, although large uncertainties exist because of sparse spatial sampling. Thus, inadequacies exist in surface observations of global cloud amounts and types, especially over the oceans, Canada, and the United States since the mid-1990s. The problem is compounded by inhomogeneities in satellite cloud data. Reprocessing of satellite data has the potential for improvements if priority is given to improved continuity of records.
Abstract
U.S. weather stations operated by NOAA’s National Weather Service (NWS) have undergone significant changes in reporting and measuring cloud ceilings. Stations operated by the Department of Defense have maintained more consistent reporting practices. By comparing cloud-ceiling data from 223 NWS first-order stations with those from 117 military stations, and by further comparison with changes in physically related parameters, inhomogeneous records, including all NWS records based only on automated observing systems and the military records prior to the early 1960s, were identified and discarded. Data from the two networks were then used to determine changes in daytime ceiling height (the above-ground height of the lowest sky-cover layer that is more than half opaque) and ceiling occurrence frequency (percentage of total observations that have ceilings) over the contiguous United States since the 1950s.
Cloud-ceiling height in the surface–3.6-km layer generally increased during 1951–2003, with more significant changes in the period after the early 1970s and in the surface–2-km layer. These increases were mostly over the western United States and in the coastal regions. No significant change was found in surface–3.6-km ceiling occurrence during 1951–2003, but during the period since the early 1970s, there is a tendency for a decrease in frequency of ceilings with height below 3.6 km. Cloud-ceiling heights above 3.6 km have shown no significant changes in the past 30 yr, but there has been an increase in frequency, consistent with the increase in ceiling height below 3.6 km. For the surface–3.6-km layer, physically consistent changes were identified as related to changes in ceiling height and frequency of occurrence. This included reductions in precipitation frequency related to low ceiling frequency, and surface warming and decreasing relative humidity accompanying increasing ceiling heights during the past 30 yr.
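The two ceiling statistics defined above (ceiling occurrence frequency and layer-restricted ceiling height) can be illustrated with a minimal sketch over hypothetical hourly records, where a missing value marks an observation with no ceiling.

```python
# Minimal sketch of the two ceiling statistics: occurrence frequency
# (share of observations reporting a ceiling) and mean ceiling height
# within the surface-3.6-km layer. Hypothetical hourly records;
# None marks an observation with no ceiling.
import numpy as np

ceilings_m = [800, None, 1500, 3000, None, 600, 2500, None, 1200, 3400]

reported = [h for h in ceilings_m if h is not None]
freq = len(reported) / len(ceilings_m)

low = [h for h in reported if h <= 3600]      # surface-3.6-km layer
print(f"ceiling occurrence frequency: {freq:.0%}")
print(f"mean ceiling height below 3.6 km: {np.mean(low):.0f} m")
```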