Search Results

Showing 1–10 of 192 items for:

  • Regression analysis
  • Bulletin of the American Meteorological Society
  • All content
Kelly Helm Smith, Andrew J. Tyre, Zhenghong Tang, Michael J. Hayes, and F. Adnan Akyuz

… from long-term exposure (Kam et al. 2019), or other influences. We use regression analysis to predict the number of #drought tweets (the dependent variable) for each state-week (the unit of analysis) using four independent variables: drought status on the U.S. Drought Monitor, news about drought, and population, as well as an estimated variable, each state's propensity to tweet about drought. We identify higher-than-expected numbers of tweets that are not accounted for by either drought status or news …
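The count-regression setup described in this excerpt can be sketched with synthetic data. Everything below is illustrative: the variable names, coefficients, and the log-linear least-squares fit are assumptions standing in for whatever count model the authors actually used.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500  # synthetic state-weeks

# Hypothetical predictors for each state-week (names are illustrative)
usdm = rng.integers(0, 5, n)          # U.S. Drought Monitor category 0-4
news = rng.poisson(2.0, n)            # drought news stories that week
log_pop = rng.normal(15.0, 1.0, n)    # log of state population
propensity = rng.normal(0.0, 0.5, n)  # state's propensity to tweet about drought

# Synthetic outcome: expected log tweet count is linear in the predictors
eta = -10.0 + 0.6 * usdm + 0.3 * news + 0.7 * log_pop + 1.0 * propensity
tweets = rng.poisson(np.exp(eta))

# Log-linear fit by ordinary least squares on log(1 + count)
X = np.column_stack([np.ones(n), usdm, news, log_pop, propensity])
beta, *_ = np.linalg.lstsq(X, np.log1p(tweets), rcond=None)

# Flag state-weeks with far more tweets than the model predicts
resid = np.log1p(tweets) - X @ beta
anomalous = np.where(resid > 2 * resid.std())[0]
print(beta.round(2), anomalous.size)
```

The residual-based flagging in the last step mirrors the paper's idea of isolating tweet spikes not accounted for by drought status, news, or population.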

Full access
Ambarish Vaidyanathan, Scott R. Kegler, Shubhayu S. Saha, and James A. Mulholland

… scheme for evaluating definitions of extreme weather events, within the context of adverse health outcomes with clear causal links to exposures characterized by such definitions. The framework, applied here to the evaluation of EHE definitions, employs cluster analysis to identify homogeneous groupings of event definitions, followed by rate regression modeling to estimate the effects for representatives of these groupings. It provides a cohesive approach to identifying those definitions (and their …

Full access
Byung-Ju Sohn and Franklin R. Robertson

Despite the general agreement that clouds cool the earth–atmosphere system, there are substantial differences among estimated magnitudes of the annual global mean of cloud radiative forcing. Recent estimates of globally averaged net cloud radiative forcing range from −2 to −27 W m⁻². The reasons for these differences have not been clarified, despite the important role of clouds in maintaining the global heat balance. Here, three estimation methods [Earth Radiation Budget Experiment (ERBE), Regression I, and Regression II] are compared using the same data source and analysis period.

The intercomparison covers February and March 1985, a period over which the major satellite radiation budget and cloudiness datasets (ERBE radiation budget, Nimbus-7, and ISCCP cloudiness) are contemporaneous. The global averages of five sets of net cloud radiative forcing from the three independent methods agree to within 3.5 W m⁻²; four of the five cases agree to within 1 W m⁻². This suggests that differences in published global mean values of net cloud radiative forcing are mainly due to different data sources and analysis periods; among all previous estimates, the best estimate of the annual mean appears to be the ERBE measurement of −17.3 W m⁻². In contrast to the close agreement of the net cloud radiative forcing estimates, both longwave and shortwave cloud radiative forcing show more dependence on the chosen method and dataset. The bias between regression-retrieved values from the Nimbus-7 and ISCCP cloud climatologies is largely attributed to the difference in total cloudiness between the two climatologies, whereas the discrepancies between the ERBE and regression methods appear to be due, in part, to conceptually different definitions of clear-sky flux.

Full access
William W. Hsieh and Benyang Tang

Empirical or statistical methods have been introduced into meteorology and oceanography in four distinct stages: 1) linear regression (and correlation), 2) principal component analysis (PCA), 3) canonical correlation analysis, and recently 4) neural network (NN) models. Despite the great popularity of NN models in many fields, there are three obstacles to adapting the NN method to meteorology–oceanography, especially in large-scale, low-frequency studies: (a) nonlinear instability with short data records, (b) large spatial data fields, and (c) difficulties in interpreting the nonlinear NN results. Recent research shows that these three obstacles can be overcome. For obstacle (a), ensemble averaging was found to be effective in controlling nonlinear instability. For (b), the PCA method was used as a prefilter for compressing the large spatial data fields. For (c), the mysterious hidden layer could be given a phase space interpretation, and spectral analysis aided in understanding the nonlinear NN relations. With these and future improvements, the nonlinear NN method is evolving into a versatile and powerful technique capable of augmenting traditional linear statistical methods in data analysis and forecasting; for example, the NN method has been used for El Niño prediction and for nonlinear PCA. The NN model is also found to be a type of variational (adjoint) data assimilation, which allows it to be readily linked to dynamical models under adjoint data assimilation, resulting in a new class of hybrid neural–dynamical models.
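The PCA-prefilter idea described above, compressing a large spatial field into a few leading principal components before passing them to an NN, can be sketched as follows. The field, the component count, and the coherent mode are all synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "large spatial data field": 200 time steps x 1000 grid points,
# with one coherent oscillating mode buried in noise
field = rng.normal(size=(200, 1000))
field += 3.0 * np.outer(np.sin(np.linspace(0.0, 8.0, 200)),
                        rng.normal(size=1000))

# PCA prefilter: SVD of the anomaly field; keep the k leading PCs
anom = field - field.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
k = 5
pcs = U[:, :k] * s[:k]  # compressed inputs (200 x 5) for a downstream NN
explained = float((s[:k] ** 2).sum() / (s ** 2).sum())

print(pcs.shape, round(explained, 2))
```

The 1000-point field is reduced to five time series, which is the compression step the abstract credits with making NN training feasible on large spatial fields; ensemble averaging over multiple NN trainings would then address the short-record instability.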

Full access
Lance F. Bosart

Consensus (the average of all forecasts) skill levels in forecasting daily maximum and minimum temperature, precipitation probability across six class intervals, and precipitation amount at the State University of New York at Albany are reviewed for the period 1977–82. Skill is measured relative to a climatological control. Forecasts are made for four consecutive 24 h periods for Albany, N.Y., beginning at 1800 GMT of the current day.

For minimum temperature, the skill levels average 57%, 41%, 26%, and 15% for forecasts 24, 48, 72, and 96 h in advance, respectively. For maximum temperature, a more limited sample yields corresponding skill levels of 84%, 49%, 34%, and 19% for 12, 36, 60, and 84 h ahead. Linear regression analysis yields little in the way of a definitive trend, given the small explained variance. Comparison with other readily available objective and subjective operational guidance establishes the credibility of the consensus forecast.
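One common way to express skill relative to a climatological control is a mean-squared-error skill score. The sketch below uses synthetic temperatures and may differ from the exact metric used in the study; it only illustrates the "skill relative to climatology" idea.

```python
import numpy as np

rng = np.random.default_rng(3)

obs = rng.normal(10.0, 5.0, 365)             # observed min temperature (synthetic)
climo = np.full_like(obs, obs.mean())        # climatological control forecast
forecast = obs + rng.normal(0.0, 2.0, 365)   # forecast with smaller error than climo

mse_f = float(np.mean((forecast - obs) ** 2))
mse_c = float(np.mean((climo - obs) ** 2))
skill = 1.0 - mse_f / mse_c  # 1 = perfect, 0 = no better than climatology
print(f"skill = {skill:.0%}")
```

A forecast that merely reproduces climatology scores 0%, so percentages like the 57% and 84% above measure genuine improvement over the climatological baseline.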

Full access
Pao-Shin Chu, Zhi-Ping Yu, and Stefan Hastenrath

To detect climate change in the Amazon Basin, as possibly induced by deforestation, time series of monthly mean outgoing longwave radiation (OLR), an index of tropical convection, and monthly rainfall totals at Belém and Manaus for the past 15 years are analyzed. A systematic bias in the original OLR series was removed prior to the analysis. Linear regression analysis and the nonparametric Mann–Kendall rank statistic are employed to detect trends. Over almost all of the basin, the OLR trend values are negative, indicating an increase of convection with time. The largest negative and statistically significant values are found in the western equatorial portion of Amazonia, where rainfall is most abundant. Consistent with this, the rainfall series at Belém and Manaus also feature upward trends. Small, statistically insignificant positive OLR trend values are confined to the southern fringe of the basin, where deforestation has been most drastic. Thus, there is little indication of a rainfall increase associated with deforestation, but rather a strong signal of enhanced convection in the portion of Amazonia contributing most strongly to the total precipitation over the basin.
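The Mann–Kendall rank statistic used above can be computed directly from a time series. This minimal version, a sketch rather than the authors' implementation, omits the correction for tied values.

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: returns the S statistic and its z-score."""
    n = len(x)
    # S counts concordant minus discordant pairs (sign of later minus earlier)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    # Variance of S under the no-trend null hypothesis (no tie correction)
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# A noisy upward series: S = 24, z ~ 2.85, significant at the 5% level
series = [1.0, 2.1, 1.9, 3.2, 4.0, 3.8, 5.1, 6.0]
s, z = mann_kendall(series)
print(s, round(z, 2))
```

Because the test uses only the ranks of the data, it detects monotonic trends without assuming linearity or normal errors, which is why it is paired here with linear regression as a robustness check.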

Full access
Andrew W. Wood and Dennis P. Lettenmaier

Streamflow forecasting is critical to water resources management in the western United States. Yet, despite the passage of almost 50 years since the development of the first computerized hydrologic simulation models and over 30 years since the development of hydrologic ensemble forecast methods, the prevalent method used for forecasting seasonal streamflow in the western United States remains the regression of spring and summer streamflow volume on spring snowpack and/or the previous winter's accumulated precipitation. A recent retrospective analysis has shown that the skill of the regression-based forecasts has not improved in the last 40 years, despite large investments in science and technology related to the monitoring and assessment of the land surface and in climate forecasting. We describe an experimental streamflow forecast system for the western United States that applies a modern macroscale land surface model (akin to those now used in numerical weather prediction and climate models) to capture hydrologic states (soil moisture and snow) at the time of forecast, incorporates data assimilation methods to improve estimates of initial state, and uses a range of climate prediction ensembles to produce ensemble forecasts of streamflow and associated hydrologic states for lead times of up to one year. The forecast system is intended to be a real-time test bed for evaluating new seasonal streamflow forecast methods. Experience with the forecast system is illustrated using results from the 2004/05 forecast season, in which an evolving drought in the Pacific Northwest diverged strikingly from extreme snow accumulations to the south. We also discuss how the forecast system relates to ongoing changes in seasonal streamflow forecast methods in the two U.S. operational agencies that have major responsibility for seasonal streamflow forecasts in the western United States.

Full access
The WASA Group

The European project WASA (Waves and Storms in the North Atlantic) has been set up to verify or disprove hypotheses of a worsening storm and wave climate in the northeast Atlantic and its adjacent seas in the present century. Its main conclusion is that the storm and wave climate in most of the northeast Atlantic and in the North Sea has undergone significant variations on timescales of decades; it has indeed roughened in recent decades, but the present intensity of the storm and wave climate seems to be comparable with that at the beginning of this century. Part of this variability is found to be related to the North Atlantic oscillation.

An analysis of a high-resolution climate change experiment, mimicking global warming due to increased greenhouse gas concentrations, results in a weak increase of storm activity and (extreme) wave heights in the Bay of Biscay and in the North Sea, while storm action and waves slightly decrease along the Norwegian coast and in most of the remaining North Atlantic area. A weak increase in storm surges in the southern and eastern part of the North Sea is expected. These projected anthropogenic changes at the time of CO2 doubling fall well within the limits of variability observed in the past.

A major methodological obstacle to assessing changes in the intensity of storm and wave events is the presence of inhomogeneities in the observational record, both in local observations and in analyzed products (such as weather maps), which usually produce an artificial increase in extreme winds. This occurs because older analyses were based on fewer observations and on more limited conceptual and numerical models of the dynamical processes than recent analyses. Therefore the assessment of changes in storminess is based on local observations of air pressure and of high-frequency variance at tide gauges; data of this sort are available for 100 yr and sometimes more. The assessment of changes in the wave climate is achieved using a two-step procedure: first, a state-of-the-art wave model is integrated with 40 yr of wind analyses, and the results are assumed to be reasonably homogeneous in the area south of 70°N and east of 20°W; then, with the help of the 40-yr simulated data, a regression is built that relates monthly mean air pressure distributions to intramonthly percentiles of wave heights at selected locations; finally, observed monthly mean air pressure fields from the beginning of this century are fed into the regression model to derive best guesses of wave statistics throughout the century.

Full access
Frederick Sanders

Forecasts of minimum temperature and precipitation amount at Boston have been made and evaluated in the Department of Meteorology, MIT, in essentially the same format since 1966. These forecasts refer to the first through fourth 24 h periods in advance and are partly categorical and partly probabilistic in form. The skill level in the consensus forecasts, relative to forecasts of the long-term mean, is slightly more than 50% for the first day and around 10% on the fourth, except for conditional quantitative precipitation forecasting, which is decidedly less skillful.

Regression analysis shows, except for the first day, slight increases in the skill of these predictions, at a rate of about six-tenths of a percent per year.

The skill of the guidance temperature forecasts of the National Meteorological Center has lagged the skill of the consensus forecasts by a decreasing amount from 1966 to 1972. The lag from 1972 to date has not changed significantly and varies between 10 and 30% of consensus skill, depending on range and season. The guidance for the conditional quantitative precipitation forecast is inferior to both consensus and the long-term median control forecasts.

Full access
Gilberto A. Vicente, Roderick A. Scofield, and W. Paul Menzel

This paper presents a description, sensitivity analyses, sample results, validation, and recent progress in the development of a new satellite rainfall estimation technique at the National Environmental Satellite, Data, and Information Service (NESDIS) of the National Oceanic and Atmospheric Administration (NOAA). The technique, called the auto-estimator, runs in real time for applications to flash flood forecasting, numerical modeling, and operational hydrology. The auto-estimator uses the Geostationary Operational Environmental Satellites-8 and -9 in the infrared (IR) 10.7-μm band to compute real-time precipitation amounts based on a power-law regression algorithm. This regression is derived from a statistical analysis between surface radar–derived instantaneous rainfall estimates and satellite-derived IR cloud-top temperatures collocated in time and space. The rainfall rate estimates are adjusted for different moisture regimes using the most recent fields of precipitable water and relative humidity generated by the National Centers for Environmental Prediction Eta Model. In addition, a mask is computed to restrict rain to regions satisfying two criteria: (a) the growth rate of the cloud, as measured by the temperature change of the cloud tops in two consecutive IR images, must be positive, and (b) the spatial gradients of the cloud-top temperature field must show distinct and isolated cold cores in the cloud-top surface. Both the growth rate and the gradient corrections are useful for locating heavy precipitation cores. The auto-estimator has been used experimentally for almost 3 yr to provide real-time instantaneous rainfall rate estimates, average hourly estimates, and 3-, 6-, and 24-h accumulations over the conterminous United States and nearby ocean areas. The NOAA/NESDIS Satellite Analysis Branch (SAB) has examined the accuracy of the rainfall estimates daily for a variety of storm systems.
SAB analysts have determined that the algorithm produces useful 1–6-h estimates for flash flood monitoring but exaggerates the area of precipitation, causing overestimation of 24-h rainfall totals associated with slow-moving, cold-topped mesoscale convective systems. The SAB analyses have also shown a tendency toward underestimation of rainfall rates in warm-top stratiform cloud systems. Until further improvements are made, this technique should be used with caution for stratiform events. The authors validate the hourly rainfall rates of the auto-estimator against gauge-adjusted radar precipitation products (with radar bias removed) in three distinct cases. Results show that the auto-estimator has modest skill at 1-h time resolution for a spatial resolution of 12 km; results improve with larger grid sizes (48 km by 48 km or larger).
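The core of the technique described above, a power-law regression between collocated radar rain rates and IR cloud-top temperatures, can be sketched as a log-log least-squares fit. The data and coefficients below are synthetic, not the operational auto-estimator values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic collocated pairs: IR cloud-top temperature (K) vs radar rain rate (mm/h),
# built so that colder tops correspond to heavier rain
T = rng.uniform(190.0, 240.0, 300)
true_a, true_b = 1.0e18, -7.5  # illustrative power-law parameters
R = true_a * T ** true_b * np.exp(rng.normal(0.0, 0.2, 300))  # multiplicative noise

# Fit R = a * T**b by ordinary least squares in log-log space
A = np.column_stack([np.ones_like(T), np.log(T)])
coef, *_ = np.linalg.lstsq(A, np.log(R), rcond=None)
a_hat, b_hat = float(np.exp(coef[0])), float(coef[1])

print(round(b_hat, 1))  # close to the generating exponent -7.5
```

The strongly negative exponent encodes the physical expectation that rain rate rises steeply as cloud tops get colder; in the operational system this curve is further adjusted by the moisture fields and the growth-rate/gradient masks described in the abstract.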

Full access