Xungang Yin and Sharon E. Nicholson

Abstract

This paper presents a water balance model for Lake Victoria that can be inverted to estimate annual rainfall over the lake. The model is calibrated using a fixed value of evaporation and regression expressions for inflow, discharge, and rainfall. Rainfall totals at stations in the catchment are used to estimate over-lake rainfall, applying a regression between catchment and over-lake rainfall derived from satellite data. The inflow regression is validated using a cross-validation technique applied to inflow estimates for the years 1956–78, and the discharge regression is validated using discharge data for the years 1901–55. The model is first written in autoregressive (AR) form for the lake-level term. Model predictions of lake level are verified by comparing them with measured lake levels for the period 1931–94: the model is initialized with the end-of-year lake level for 1930 and then driven only by over-lake rainfall as external input. The predicted and measured lake levels correlate at 0.98, confirming that fluctuations of Lake Victoria are driven predominantly by rainfall. The model is then "inverted" so that the current year's over-lake rainfall is expressed as a function of the two lake-level terms; if the beginning and ending lake levels of a year are known, the over-lake rainfall for that year follows directly. Applying the inverse model to the measured lake levels of 1899–1994 yields estimates of over-lake annual rainfall for 1900–94. A comparison with over-lake rainfall for the period 1931–94 gives a root-mean-square error of 98 mm yr−1, corresponding to 6% of the over-lake annual mean rainfall. This model is also compared with a previous water balance model, which can be employed only for multiyear mean rainfall estimates. The two models complement each other: the current model can calculate annual rainfall, whereas the previous model still provides better estimates of multiyear means.
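The abstract does not give the calibrated coefficients, but the forward/inverse relationship it describes can be sketched with an invented linear AR form. All coefficients and numbers below are hypothetical, chosen only to show that the inversion exactly recovers the rainfall driving the forward model:

```python
# Hypothetical sketch of an invertible AR lake-level model.
# Coefficients a, b, c and all values are invented, not the paper's.

def forward(level_prev, rain, a=0.9, b=0.002, c=10.0):
    """Predict this year's end-of-year lake level (m) from last year's
    level and this year's over-lake rainfall (mm)."""
    return a * level_prev + b * rain + c

def inverted(level_prev, level_curr, a=0.9, b=0.002, c=10.0):
    """Recover over-lake rainfall from two consecutive end-of-year levels
    by solving the forward equation for the rainfall term."""
    return (level_curr - a * level_prev - c) / b

rain_true = 1600.0                     # mm/yr, illustrative
level_1930 = 1134.0                    # m, illustrative datum
level_1931 = forward(level_1930, rain_true)
print(inverted(level_1930, level_1931))  # recovers 1600.0
```

Given a sequence of measured end-of-year levels, applying `inverted` year by year is the operation the paper uses to reconstruct the 1900–94 rainfall series.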

Full access
Xungang Yin, Arnold Gruber, and Phil Arkin

Abstract

The two monthly precipitation products of the Global Precipitation Climatology Project (GPCP) and the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) are compared over a 23-yr period, January 1979–December 2001. For the long-term mean, both products clearly capture the major precipitation patterns, but the pattern magnitudes differ. Over the tropical ocean the CMAP is higher than the GPCP, whereas the reverse holds over the high-latitude ocean. The GPCP–CMAP spatial correlation is generally higher over land than over the ocean. The correlation between the global mean oceanic GPCP and CMAP is notably low, very likely because the input data of the two products have much less in common over the ocean; in particular, the CMAP's use of atoll data is disputable. The decreasing trend in CMAP oceanic precipitation is found to be an artifact of input data changes and atoll sampling error. In general, over-ocean precipitation is more reasonably represented by the GPCP; over land the two products are close, although their different merging algorithms can sometimes produce substantial discrepancies in sensitive areas such as equatorial West Africa. EOF analysis shows that the GPCP and the CMAP are similar in 6 of the first 10 modes, and the first two leading modes (ENSO patterns) of the GPCP are nearly identical to their CMAP counterparts. Input data changes [e.g., January 1986 for the Geostationary Operational Environmental Satellite (GOES) precipitation index (GPI), July 1987 for the Special Sensor Microwave Imager (SSM/I), May 1994 for the Microwave Sounding Unit (MSU), and January 1996 for atolls] have implications for the behavior of the two datasets. Several abrupt changes identified in the statistics of the two datasets, including changes in over-ocean precipitation, the spatial correlation time series, and some of the EOF principal components, can be related to one or more of these input data changes.
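A minimal sketch of the kind of spatial (pattern) correlation statistic compared here, using cosine-of-latitude area weighting on a tiny invented grid. The fields, grid, and weighting are illustrative only and are not taken from GPCP or CMAP processing:

```python
import math

def weighted_spatial_corr(a, b, lats):
    """Area-weighted spatial correlation between two gridded fields.
    a, b: 2D lists indexed [lat][lon]; lats: latitude (deg) of each row.
    Weights are cos(latitude), a common approximation of grid-cell area."""
    w, xs, ys = [], [], []
    for i, lat in enumerate(lats):
        cw = math.cos(math.radians(lat))
        for j in range(len(a[i])):
            w.append(cw)
            xs.append(a[i][j])
            ys.append(b[i][j])
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, xs)) / sw
    my = sum(wi * y for wi, y in zip(w, ys)) / sw
    cov = sum(wi * (x - mx) * (y - my) for wi, x, y in zip(w, xs, ys)) / sw
    vx = sum(wi * (x - mx) ** 2 for wi, x in zip(w, xs)) / sw
    vy = sum(wi * (y - my) ** 2 for wi, y in zip(w, ys)) / sw
    return cov / math.sqrt(vx * vy)

field1 = [[1.0, 2.0], [3.0, 4.0]]      # invented 2x2 precipitation fields
field2 = [[1.1, 2.1], [2.9, 4.2]]
r = weighted_spatial_corr(field1, field2, [0.0, 30.0])
```

A time series of such correlations (one value per month) is the sort of statistic in which the abstract reports abrupt changes tied to input-data transitions.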

Full access
Michael C. Kruk, Russell Vose, Richard Heim, Anthony Arguez, Jesse Enloe, Xungang Yin, and Trevor Wallis
Open access
Imke Durre, Michael F. Squires, Russell S. Vose, Xungang Yin, Anthony Arguez, and Scott Applequist

Abstract

The 1981–2010 “U.S. Climate Normals” released by the National Oceanic and Atmospheric Administration’s (NOAA) National Climatic Data Center include a suite of monthly, seasonal, and annual statistics that are based on precipitation, snowfall, and snow-depth measurements. This paper describes the procedures used to calculate the average totals, frequencies of occurrence, and percentiles that constitute these normals. All parameters were calculated from a single, state-of-the-art dataset of daily observations, taking care to produce normals that were as representative as possible of the full 1981–2010 period, even when the underlying data records were incomplete. In the resulting product, average precipitation totals are available at approximately 9300 stations across the United States and parts of the Caribbean Sea and Pacific Ocean islands. Snowfall and snow-depth statistics are provided for approximately 5300 of those stations, as compared with several hundred stations in the 1971–2000 normals. The 1981–2010 statistics exhibit the familiar climatological patterns across the contiguous United States. When compared with the same calculations for 1971–2000, the later period is characterized by a smaller number of days with snow on the ground and less total annual snowfall across much of the contiguous United States; wetter conditions over much of the Great Plains, Midwest, and northern California; and drier conditions over much of the Southeast and Pacific Northwest. These differences are a reflection of the removal of the 1970s and the addition of the 2000s to the 30-yr-normals period as part of this latest revision of the normals.
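A minimal sketch of averaging a monthly total over an incomplete 30-yr record, assuming a simple minimum-valid-years rule. The threshold and missing-data handling here are illustrative only; NOAA's actual completeness criteria and estimation procedures are more sophisticated:

```python
def monthly_normal(totals, min_years=10):
    """Average a station's monthly precipitation totals over a 30-yr
    period, skipping missing years (None). Returns None if fewer than
    `min_years` valid values exist (an invented completeness rule)."""
    valid = [t for t in totals if t is not None]
    if len(valid) < min_years:
        return None
    return sum(valid) / len(valid)

# 20 valid years at 50 mm, 10 missing -> normal is computable
print(monthly_normal([50.0] * 20 + [None] * 10))  # 50.0
# Only 5 valid years -> too incomplete under this rule
print(monthly_normal([50.0] * 5 + [None] * 25))   # None
```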

Full access
Scott Applequist, Anthony Arguez, Imke Durre, Michael F. Squires, Russell S. Vose, and Xungang Yin

The 1981–2010 U.S. Climate Normals released by the National Oceanic and Atmospheric Administration's (NOAA) National Climatic Data Center (NCDC) include a suite of descriptive statistics based on hourly observations. For each hour and day of the year, statistics of temperature, dew point, mean sea level pressure, wind, clouds, heat index, wind chill, and heating and cooling degree hours are provided as 30-year averages, frequencies of occurrence, and percentiles. These hourly normals are available for 262 locations, primarily major airports, across the United States and its Pacific territories. We encourage use of these products specifically for examining the diurnal cycle of a particular variable and how that cycle may shift over the annual cycle.
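The core of an hourly normal is a long-term average binned by hour of day. A minimal sketch, with an invented record schema of (hour, value) pairs rather than the product's actual input format:

```python
from collections import defaultdict

def hourly_normals(records):
    """Mean value for each hour of the day across many observations.
    `records`: iterable of (hour, value) pairs; this flat schema is an
    illustrative stand-in for the real multi-year hourly archive."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, value in records:
        sums[hour] += value
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}

obs = [(0, 10.0), (0, 12.0), (12, 20.0), (12, 22.0)]  # invented temps
print(hourly_normals(obs))  # {0: 11.0, 12: 21.0}
```

Computing this separately for each day of the year, as the product does, is what exposes how the diurnal cycle shifts across the annual cycle.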

Full access
Boyin Huang, Michelle L’Heureux, Zeng-Zhen Hu, Xungang Yin, and Huai-Min Zhang

Abstract

Previous research has shown that the 1877/78 El Niño resulted in great famine events around the world. However, the strength and statistical significance of this El Niño event have not been fully addressed, largely because of the lack of data. We take a closer look at the data using an ensemble analysis of the Extended Reconstructed Sea Surface Temperature version 5 (ERSSTv5). The ERSSTv5 standard run indicates a strong El Niño event with a peak monthly value of the Niño-3 index of 3.5°C during 1877/78, stronger than those during 1982/83, 1997/98, and 2015/16. However, an analysis of the ERSSTv5 ensemble runs indicates that the strength and significance (uncertainty estimates) depend on the construction of the ensembles. A 1000-member ensemble analysis shows that the ensemble mean Niño-3 index has a much weaker peak of 1.8°C, and its uncertainty is much larger during 1877/78 (2.8°C) than during 1982/83 (0.3°C), 1997/98 (0.2°C), and 2015/16 (0.1°C). Further, the large uncertainty during 1877/78 is associated with the selection of a short (1-month) raw-data filtering period and a large (20%) acceptance criterion for empirical orthogonal teleconnection modes in the ERSSTv5 reconstruction. By adjusting these two parameters, the uncertainty during 1877/78 decreases to 0.5°C, while the peak monthly value of the Niño-3 index in the ensemble mean increases to 2.8°C, suggesting a strong and statistically significant 1877/78 El Niño event. The adjustment of these two parameters is validated by masking the modern observations of 1981–2017 to 1861–97. Based on the estimated uncertainties, the differences among the strengths of these four major El Niño events are not statistically significant.
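The ensemble mean and uncertainty contrasted in the abstract reduce to simple statistics across reconstruction members. A minimal sketch with five invented members standing in for the 1000-member ensemble (the numbers are not the paper's):

```python
import statistics

def ensemble_stats(members):
    """Ensemble mean and spread (population std dev) of a Nino-3 index
    value across reconstruction members. A wide spread relative to the
    mean is what makes an event's strength statistically uncertain."""
    mean = statistics.fmean(members)
    spread = statistics.pstdev(members)
    return mean, spread

nino3_peak_members = [1.2, 3.4, 0.6, 2.0, 1.8]  # degC, invented members
mean, spread = ensemble_stats(nino3_peak_members)
print(mean)  # 1.8
```

With a spread of this size, a 3.5°C standard-run peak and a 1.8°C ensemble-mean peak are not contradictory; the ensemble simply cannot pin the event down tightly, which is the paper's point about the 1877/78 uncertainty.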

Open access
Imke Durre, Xungang Yin, Russell S. Vose, Scott Applequist, and Jeff Arnfield

Abstract

The Integrated Global Radiosonde Archive (IGRA) is a collection of historical and near-real-time radiosonde and pilot balloon observations from around the globe. Consisting of a foundational dataset of individual soundings, a set of sounding-derived parameters, and monthly means, the collection is maintained and distributed by the National Oceanic and Atmospheric Administration's National Centers for Environmental Information (NCEI). It has been used in a variety of applications, including reanalysis projects, assessments of tropospheric and stratospheric temperature and moisture trends, a wide range of studies of atmospheric processes and structures, and the validation of observations from other observing platforms. In 2016, NCEI released version 2 of the dataset, IGRA 2, which incorporates data from a considerably greater number of data sources, thus increasing the data volume by 30%, extending the data back in time to as early as 1905, and improving the spatial coverage. To create IGRA 2, 40 data sources were converted into a common data format and merged into one coherent dataset using a newly designed suite of algorithms. Then, an overhauled version of the IGRA 1 quality-assurance system was applied to the integrated data. Last, monthly means and sounding-by-sounding moisture and stability parameters were derived from the new dataset. All of these components are updated on a regular basis and made available for download free of charge on the NCEI website.
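The merge step described above can be illustrated, very loosely, as prioritized deduplication: when several sources report a sounding for the same station and launch time, keep one record according to source priority. This is only a sketch of the general idea; the actual IGRA 2 merge algorithms, record schema, and selection rules are far more elaborate:

```python
def merge_soundings(sources):
    """Merge soundings from multiple sources into one record per
    (station, launch time). `sources` is ordered highest priority
    first; the first source to supply a key wins (invented rule)."""
    merged = {}
    for source in sources:                    # highest priority first
        for sounding in source:
            key = (sounding["station"], sounding["time"])
            merged.setdefault(key, sounding)  # keep earlier (higher) source
    return merged

# Invented records: the higher-priority source has a fuller sounding
# for 00 UTC; only the lower-priority source has the 12 UTC launch.
high = [{"station": "USM00072520", "time": "2016-01-01T00", "levels": 60}]
low = [{"station": "USM00072520", "time": "2016-01-01T00", "levels": 20},
       {"station": "USM00072520", "time": "2016-01-01T12", "levels": 55}]
out = merge_soundings([high, low])
print(len(out))  # 2
```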

Full access
Kenneth E. Kunkel, Thomas R. Karl, Michael F. Squires, Xungang Yin, Steve T. Stegall, and David R. Easterling

Abstract

Trends of extreme precipitation (EP) using various combinations of average return intervals (ARIs) of 1, 2, 5, 10, and 20 years with durations of 1, 2, 5, 10, 20, and 30 days were calculated regionally across the contiguous United States. The sign of the EP trend varies by region, as well as by ARI and duration, even though the trends area averaged across the contiguous United States are statistically significant and upward for all combinations of EP thresholds. Spatially, there is a pronounced east-to-west gradient in the EP trends, with strong upward trends east of the Rocky Mountains. In general, upward trends are larger and more significant for longer ARIs, but shorter ARIs contribute significantly more to the trend in total seasonal and annual precipitation because they occur more frequently. Across much of the contiguous United States, upward trends of warm-season EP are substantially larger than those for the cold season and have a substantially greater effect on the annual trend in total precipitation; this holds even in areas where total precipitation is nearly evenly divided between the cold and warm seasons. Long-duration events (e.g., 30 days) contribute more to annual trends than short-duration events. Coincident statistically significant upward trends of EP and precipitable water (PW) occur in many regions, especially during the warm season. Increases in PW are likely one of several factors responsible for the increase in EP (and average total precipitation) observed in many areas across the contiguous United States.
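An ARI threshold is the precipitation amount exceeded, on average, once every N years. A minimal empirical sketch using a plotting-position estimate on invented annual maxima; the paper's actual threshold estimation may differ:

```python
def return_level(annual_maxima, ari_years):
    """Empirical threshold for a given average return interval (ARI):
    with n years of annual maxima, the k-th largest value is exceeded
    about k / (n + 1) times per year, so pick k ~ (n + 1) / ARI.
    A simple plotting-position estimate, for illustration only."""
    ranked = sorted(annual_maxima, reverse=True)
    n = len(ranked)
    k = max(1, round((n + 1) / ari_years))
    return ranked[k - 1]

# 20 invented annual 1-day maxima (mm/day)
maxima = [31, 45, 28, 60, 52, 39, 70, 33, 48, 55,
          42, 66, 37, 58, 49, 44, 62, 35, 51, 40]
print(return_level(maxima, 5))   # 5-yr ARI threshold: 60
print(return_level(maxima, 20))  # 20-yr ARI threshold: 70
```

Counting exceedances of such thresholds per season, and trending those counts, is the kind of calculation behind the regional EP trends discussed above.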

Open access
Anthony Arguez, Imke Durre, Scott Applequist, Russell S. Vose, Michael F. Squires, Xungang Yin, Richard R. Heim Jr., and Timothy W. Owen

The National Oceanic and Atmospheric Administration (NOAA) released the 1981–2010 U.S. Climate Normals in July 2011, representing the latest decadal installment of this long-standing product line. Climatic averages (and other statistics) of temperature, precipitation, snowfall, and numerous derived quantities were calculated for ~9,800 stations operated by the U.S. National Weather Service (NWS). They include estimated normals, or “quasi normals,” for approximately 2,000 active short-record stations such as those in the U.S. Climate Reference Network. The 1981–2010 installment features several new products and methodological enhancements: 1) state-of-the-art temperature homogenization at the monthly scale, 2) extensive utilization of quality-controlled daily climate data, 3) new statistical approaches for calculating daily temperature normals and heating and cooling degree days, and 4) a comprehensive suite of precipitation, snowfall, and snow depth statistics. This paper provides a general overview of this new suite of climate normals products.

Full access
Boyin Huang, Matthew J. Menne, Tim Boyer, Eric Freeman, Byron E. Gleason, Jay H. Lawrimore, Chunying Liu, J. Jared Rennie, Carl J. Schreck III, Fengying Sun, Russell Vose, Claude N. Williams, Xungang Yin, and Huai-Min Zhang

Abstract

This analysis estimates uncertainty in the NOAA global surface temperature (GST) version 5 (NOAAGlobalTemp v5) product, which consists of sea surface temperature (SST) from the Extended Reconstructed SST version 5 (ERSSTv5) and land surface air temperature (LSAT) from the Global Historical Climatology Network monthly version 4 (GHCNm v4). Total uncertainty in SST and LSAT consists of parametric and reconstruction uncertainties. The parametric uncertainty represents the dependence of SST/LSAT reconstructions on selecting 28 (6) internal parameters of SST (LSAT), and is estimated by a 1000-member ensemble from 1854 to 2016. The reconstruction uncertainty represents the residual error of using a limited number of 140 (65) modes for SST (LSAT). Uncertainty is quantified at the global scale as well as the local grid scale. Uncertainties in SST and LSAT at the local grid scale are larger in the earlier period (1880s–1910s) and during the two world wars due to sparse observations, then decrease in the modern period (1950s–2010s) due to increased data coverage. Uncertainties in SST and LSAT at the global scale are much smaller than those at the local grid scale due to error cancellations by averaging. Uncertainties are smaller in SST than in LSAT due to smaller SST variabilities. Comparisons show that GST and its uncertainty in NOAAGlobalTemp v5 are comparable to those in other internationally recognized GST products. The differences between NOAAGlobalTemp v5 and other GST products are within their uncertainties at the 95% confidence level.
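Two relationships in this abstract lend themselves to a short numerical sketch: independent uncertainty components combine in quadrature, and averaging over many grid cells shrinks the global-scale uncertainty relative to the local scale. Both formulas below are standard idealizations (assuming independence), not the paper's full error model:

```python
import math

def total_uncertainty(parametric, reconstruction):
    """Combine two independent uncertainty components in quadrature:
    sigma_total = sqrt(sigma_p^2 + sigma_r^2)."""
    return math.sqrt(parametric ** 2 + reconstruction ** 2)

def global_scale_uncertainty(sigma_local, n_independent):
    """Uncertainty of a global mean under the idealized assumption of
    n independent grid-scale errors: it shrinks as 1/sqrt(n), which is
    why global-scale uncertainty is much smaller than grid-scale."""
    return sigma_local / math.sqrt(n_independent)

print(total_uncertainty(0.3, 0.4))        # 0.5
print(global_scale_uncertainty(1.0, 100)) # 0.1
```

In practice grid-scale errors are spatially correlated, so the real cancellation is weaker than 1/sqrt(n), but the qualitative effect is the one the abstract describes.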

Open access