Search Results
You are looking at 1 - 10 of 65 items for
- Author or Editor: George J. Huffman
- Refine by Access: All Content
Abstract
The random errors contained in a finite set E of precipitation estimates result from both finite sampling and measurement–algorithm effects. The expected root-mean-square random error associated with the estimated average precipitation in E is shown to be σ_r = r̄[(H − p)/(p N_I)]^1/2, where r̄ is the space–time-average precipitation estimate over E, H is a function of the shape of the probability distribution of precipitation (the nondimensional second moment), p is the frequency of nonzero precipitation in E, and N_I is the number of independent samples in E. All of these quantities are variables of the space–time-average dataset. In practice H is nearly constant and close to the value 1.5 over most of the globe. An approximate form of σ_r is derived that accommodates the limitations of typical monthly datasets; it is then applied to the microwave, infrared, and gauge precipitation monthly datasets from the Global Precipitation Climatology Project. As an aid to visualizing differences in σ_r for various datasets, a “quality index” is introduced. Calibration in a few locations with dense gauge networks reveals that the approximate form is a reasonable first step in estimating σ_r.
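As a minimal illustration, the stated formula is straightforward to evaluate directly. The sketch below is not from the paper; the input values are illustrative, with H = 1.5 as the abstract suggests.

```python
import numpy as np

# Minimal sketch (not the paper's code) evaluating the stated formula
# sigma_r = rbar * [(H - p) / (p * N_I)]^(1/2). All inputs are
# illustrative values, not results from the paper.

def rms_random_error(rbar, H, p, n_indep):
    """Expected RMS random error of an average precipitation estimate.

    rbar    : space-time-average precipitation estimate over the set E
    H       : nondimensional second moment of the precipitation
              distribution (close to 1.5 over most of the globe)
    p       : frequency of nonzero precipitation in E
    n_indep : number of independent samples N_I in E
    """
    return rbar * np.sqrt((H - p) / (p * n_indep))

# Example: 5 mm/day average, H = 1.5, rain 40% of the time,
# 30 independent samples -> about 1.51 mm/day RMS random error.
print(rms_random_error(5.0, 1.5, 0.4, 30))
```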
The Department of Meteorology at the University of Maryland is developing one of the first computer systems in meteorology to take advantage of the new networked computer architecture that has been made possible by recent advances in computer and communication technology. Elements of the department's system include scientific workstations, local mainframe computers, remote mainframe computers, local-area networks, “long-haul” computer-to-computer communications, and “receive-only” communications. Some background is provided, together with highlights of some lessons that were learned in carrying out the design. In agreement with work in the Unidata Project, this work shows that the networked computer architecture discussed here presents a new style of resources for solving problems that arise in meteorological research and education.
Abstract
This paper addresses the following open question: What set of error metrics for satellite rainfall data can advance the hydrologic application of new-generation, high-resolution rainfall products over land? The authors’ primary aim is to initiate a framework for building metrics that are mutually interpretable by hydrologists (users) and algorithm developers (data producers) and to provide more insightful information on the quality of the satellite estimates. In addition, hydrologists can use the framework to develop a space–time error model for simulating stochastic realizations of satellite estimates for quantification of the implication on hydrologic simulation uncertainty. First, the authors conceptualize the error metrics in three general dimensions: 1) spatial (how does the error vary in space?); 2) retrieval (how “off” is each rainfall estimate from the true value over rainy areas?); and 3) temporal (how does the error vary in time?). They suggest formulations for error metrics specific to each dimension, in addition to ones that are already widely used by the community. They then investigate the behavior of these metrics as a function of spatial scale ranging from 0.04° to 1.0° for the Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks (PERSIANN) geostationary infrared-based algorithm. It is observed that moving to finer space–time scales for satellite rainfall estimation requires explicitly probabilistic measures that are mathematically amenable to space–time stochastic simulation of satellite rainfall data. The probability of detection of rain as a function of ground validation rainfall magnitude is found to be most sensitive to scale followed by the correlation length for detection of rain. Conventional metrics such as the correlation coefficient, frequency bias, false alarm ratio, and equitable threat score are found to be modestly sensitive to scales smaller than 0.24° latitude/longitude. Error metrics that account for an algorithm’s ability to capture rainfall intermittency as a function of space appear useful in identifying the useful spatial scales of application for the hydrologist. It is shown that metrics evolving from the proposed conceptual framework can identify seasonal and regional differences in reliability of four global satellite rainfall products over the United States more clearly than conventional metrics. The proposed framework for building such error metrics can lay a foundation for better interaction between the data-producing community and hydrologists in shaping the new generation of satellite-based, high-resolution rainfall products, including those being developed for the planned Global Precipitation Measurement (GPM) mission.
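For reference, the conventional detection metrics named above (probability of detection, false alarm ratio, frequency bias, equitable threat score) all derive from a 2×2 rain/no-rain contingency table. The sketch below is a hedged illustration; the synthetic fields and the 0.1 mm/h rain/no-rain threshold are my assumptions, not values from the paper.

```python
import numpy as np

# Hedged sketch of the conventional detection metrics (POD, FAR,
# frequency bias, ETS) computed from a 2x2 rain/no-rain contingency
# table. Synthetic data and threshold are illustrative assumptions.

def contingency_metrics(sat, obs, thresh=0.1):
    """sat, obs: rain-rate arrays (mm/h); thresh: rain/no-rain cutoff."""
    sat_rain, obs_rain = sat >= thresh, obs >= thresh
    hits = np.sum(sat_rain & obs_rain)
    misses = np.sum(~sat_rain & obs_rain)
    false_alarms = np.sum(sat_rain & ~obs_rain)
    hits_random = (hits + misses) * (hits + false_alarms) / sat.size
    return {
        "POD": hits / (hits + misses),
        "FAR": false_alarms / (hits + false_alarms),
        "frequency_bias": (hits + false_alarms) / (hits + misses),
        "ETS": (hits - hits_random)
               / (hits + misses + false_alarms - hits_random),
    }

rng = np.random.default_rng(0)
obs = rng.gamma(0.5, 2.0, 10_000) * (rng.random(10_000) < 0.3)
sat = np.clip(obs + rng.normal(0.0, 0.5, 10_000), 0.0, None)  # noisy retrieval
print(contingency_metrics(sat, obs))
```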
No abstract available.
Abstract
A cumulus cloud's size, shape and internal properties can be predicted, provided that the rate of entrainment is determined by a suitable entrainment parameterization theory. A cumulus cloud model based on such a theory is analogous to the mixed-layer models of the planetary boundary layer (PBL) and the upper ocean.
The entrainment rate is closely related to turbulent transport near the cloud boundary. The mixing-length theory suggested by Asai and Kasahara (1967) is examined in this light. An alternative theory is suggested, which completely removes the strong scale-dependence of the Asai-Kasahara model. Scale-dependence is reintroduced by including the perturbation pressure term of the equation of vertical motion.
For a given sounding, the new model predicts deeper clouds than the Asai-Kasahara model. This results both from the entrainment assumption used, and from the effects of the perturbation pressure.
The expected cloud-top entrainment rate is zero for the simple model considered, although finite-difference errors lead to a positive cloud-top entrainment rate in actual simulations. Lateral entrainment nevertheless dominates cloud-top entrainment. The need for a realistic parameterization of cloud-top entrainment is noted.
The fractional entrainment rate for updrafts is shown to vary only slightly with height, and to decrease only slowly as the cloud radius increases. The fractional detrainment rate for updrafts increases with height. Downdrafts are found to entrain heavily near the PBL top, and to detrain primarily into the PBL, in agreement with the observations of Betts (1976).
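As generic context for the fractional entrainment rate discussed above, the textbook entraining-plume relation dh_c/dz = −ε(h_c − h_env) shows how ε dilutes an in-cloud conserved variable with height. The sketch below is my illustration under that standard assumption, not the paper's model; the profile and ε value are invented.

```python
import numpy as np

# Generic entraining-plume sketch, not the paper's model: an in-cloud
# conserved variable h_c is diluted with height at a fractional
# entrainment rate eps via dh_c/dz = -eps * (h_c - h_env). Since the
# abstract finds eps varies only slightly with height for updrafts,
# a constant eps is assumed here for illustration.

def entraining_plume(h_env, dz, eps):
    """h_env: environmental profile of a conserved variable (J/kg);
    dz: layer thickness (m); eps: fractional entrainment rate (1/m)."""
    h_c = np.empty_like(h_env)
    h_c[0] = h_env[0]  # parcel starts with cloud-base properties
    for k in range(1, len(h_env)):
        h_c[k] = h_c[k - 1] - eps * dz * (h_c[k - 1] - h_env[k - 1])
    return h_c

z = np.arange(0.0, 10_000.0, 250.0)
h_env = 340e3 - 3.0 * z                 # crude moist-static-energy profile
h_cloud = entraining_plume(h_env, 250.0, eps=2e-4)  # eps ~ 0.2 per km
print((h_cloud - h_env)[::8])           # dilution deficit vs. height
```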
Abstract
A scheme developed at the University of Maryland is described to illustrate the high-level design decisions which must be made in order to fully utilize the asynchronous character-oriented data transmitted over the Domestic Data Service. We list the major functions which nearly real-time meteorological-data systems must perform at local sites, and then emphasize solutions to the high-level design problems implicit in these functions, including processing sequence, data quality control, on-line data file management, off-line data archive content, and archive format.
Abstract
Hydrologists and other users need to know the uncertainty of the satellite rainfall datasets across the range of time–space scales over the whole domain of the dataset. Here, “uncertainty” refers to the general concept of the “deviation” of an estimate from the reference (or ground truth) where the deviation may be defined in multiple ways. This uncertainty information can provide insight to the user on the realistic limits of utility, such as hydrologic predictability, which can be achieved with these satellite rainfall datasets. However, satellite rainfall uncertainty estimation requires ground validation (GV) precipitation data. On the other hand, satellite data will be most useful over regions that lack GV data, for example, developing countries. This paper addresses the open issues for developing an appropriate uncertainty transfer scheme that can routinely estimate various uncertainty metrics across the globe by leveraging a combination of spatially dense GV data and temporally sparse surrogate (or proxy) GV data, such as the Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar and the Global Precipitation Measurement (GPM) mission dual-frequency precipitation radar. The TRMM Multisatellite Precipitation Analysis (TMPA) products over the United States spanning a record of 6 yr are used as a representative example of satellite rainfall. It is shown that there exists a quantifiable spatial structure in the uncertainty of satellite data for spatial interpolation. Probabilistic analysis of sampling offered by the existing constellation of passive microwave sensors indicates that transfer of uncertainty for hydrologic applications may be effective at daily time scales or higher during the GPM era. Finally, a commonly used spatial interpolation technique (kriging), which leverages the spatial correlation of estimation uncertainty, is assessed at climatologic, seasonal, monthly, and weekly time scales. It is found that the effectiveness of kriging is sensitive to the type of uncertainty metric, time scale of transfer, and the density of GV data within the transfer domain. Transfer accuracy is lowest at weekly time scales with the error doubling from monthly to weekly. However, at very low GV data density (<20% of the domain), the transfer accuracy is too low to show any distinction as a function of the time scale of transfer.
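Since the abstract assesses kriging as the transfer technique, a minimal ordinary-kriging sketch may help fix ideas: interpolation weights solve a linear system built from a semivariogram of the uncertainty metric at GV boxes. The exponential variogram, its parameters, and the synthetic data below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Minimal ordinary-kriging sketch for transferring an uncertainty metric
# from gauged (GV) grid boxes to ungauged ones. All parameters and data
# are illustrative assumptions.

def exp_variogram(h, sill=1.0, corr_len=200.0, nugget=0.05):
    """Exponential semivariogram; h in km."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / corr_len))

def ordinary_krige(xy_obs, z_obs, xy_tgt):
    n = len(xy_obs)
    # Kriging system: [[Gamma, 1], [1^T, 0]] w = [gamma_0, 1]
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(
        np.linalg.norm(xy_obs[:, None] - xy_obs[None, :], axis=-1))
    np.fill_diagonal(A[:n, :n], 0.0)  # gamma(0) = 0
    A[n, n] = 0.0
    out = np.empty(len(xy_tgt))
    for i, p in enumerate(xy_tgt):
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(xy_obs - p, axis=-1))
        w = np.linalg.solve(A, b)
        out[i] = w[:n] @ z_obs        # weighted sum of observed metric
    return out

rng = np.random.default_rng(1)
xy_obs = rng.uniform(0.0, 500.0, (40, 2))     # 40 "GV" boxes (km coords)
z_obs = np.sin(xy_obs[:, 0] / 100.0) + 0.1 * rng.normal(size=40)
xy_tgt = rng.uniform(0.0, 500.0, (5, 2))      # ungauged boxes
print(ordinary_krige(xy_obs, z_obs, xy_tgt))
```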
Abstract
Observations show that cumulus clouds often occur in long-lived mesoscale groups, or clumps. Five possible explanations of clumping are surveyed. The “mutual protection hypothesis,” that clumps occur because cumulus clouds create and maintain, in their near environments, relatively favorable conditions for the development of succeeding clouds, is examined at length. This idea is tested through the use of a simple time-dependent model in which clouds, triggered at randomly selected locations, tend to stabilize their environment in the face of a prescribed constant forcing. Results show that clumping occurs when the cloud-induced stabilization rate is strongest at an intermediate distance from a cloud, and that it does not occur when the stabilization rate decreases monotonically away from a cloud.
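The test described above is simple enough to caricature in a few lines. The toy below is my construction, not the paper's model: constant forcing builds instability, clouds trigger at random eligible points of a periodic 1-D domain, and each cloud stabilizes its surroundings with a kernel whose shape is the experimental knob; higher block-to-block variance of cloud counts indicates clumping. All kernel shapes and parameters are invented for illustration.

```python
import numpy as np

# Toy 1-D caricature (not the paper's model) of the mutual-protection
# test: compare a stabilization kernel strongest at an intermediate
# distance against a monotonically decreasing one.

def run(kernel, n=200, steps=2000, forcing=0.01, thresh=1.0, seed=0):
    rng = np.random.default_rng(seed)
    instab = rng.uniform(0.0, thresh, n)
    counts = np.zeros(n)
    offsets = np.arange(-20, 21)
    for _ in range(steps):
        instab += forcing
        ready = np.flatnonzero(instab >= thresh)
        if ready.size:
            i = rng.choice(ready)          # trigger one cloud
            counts[i] += 1
            instab[(i + offsets) % n] -= kernel  # cloud-induced stabilization
    return counts

d = np.abs(np.arange(-20, 21))
ring = 0.2 * np.exp(-((d - 8.0) ** 2) / 8.0)   # strongest at distance ~8
mono = 0.2 * np.exp(-d / 4.0)                  # decreases monotonically
for name, kern in [("intermediate-peak", ring), ("monotonic", mono)]:
    c = run(kern)
    # Clumping diagnostic: variance of cloud counts over coarse blocks.
    print(name, np.var(c.reshape(20, 10).sum(axis=1)))
```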
Abstract
This paper improves upon an existing extreme precipitation monitoring system that is based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach in which data from similar locations are pooled to increase the quality of the resulting model parameter estimates to compensate for the short data record. The regional analysis is divided into two stages. First, the region defined by the TRMM measurements is partitioned into approximately 28 000 nonoverlapping clusters using a recursive k-means clustering scheme. Next, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of applying the block maxima approach used in the existing system, in which the generalized extreme value probability distribution is fit to the annual precipitation maxima at each site separately, the present work adopts the peak-over-threshold method of classifying points as extreme if they exceed a prespecified threshold. Theoretical considerations motivate using the point process framework for modeling extremes. The fitted parameters are used to estimate trends and to construct simple and intuitive average recurrence interval (ARI) maps that reveal how rare a particular precipitation event is. This information could be used by policy makers for disaster monitoring and prevention. The new method eliminates much of the noise that was produced by the existing models because of a short data record, producing more reasonable ARI maps when compared with NOAA’s long-term Climate Prediction Center ground-based observations. Furthermore, the proposed method can be applied to other extreme climate records.
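To make the peak-over-threshold idea concrete, the hedged sketch below fits a generalized Pareto distribution to threshold excesses of a synthetic daily series and converts a rain amount into an average recurrence interval. The record, threshold choice, and record length are illustrative; the paper's actual point-process fit over ~28 000 clusters is not reproduced here.

```python
import numpy as np
from scipy import stats

# Hedged peaks-over-threshold sketch of the ARI idea: excesses of daily
# rain over a high threshold are fit with a generalized Pareto
# distribution (GPD), and the fit converts a rain amount into an
# average recurrence interval. All data are synthetic.

rng = np.random.default_rng(42)
years = 15
daily = rng.gamma(0.4, 6.0, 365 * years)   # synthetic daily rain (mm)

u = np.quantile(daily, 0.98)               # POT threshold
excess = daily[daily > u] - u
lam = excess.size / years                  # exceedances per year

# Fit the GPD to threshold excesses (location fixed at zero).
shape, _, scale = stats.genpareto.fit(excess, floc=0)

def ari_years(x):
    """Average recurrence interval (yr) of a daily total of x mm."""
    p = stats.genpareto.sf(x - u, shape, loc=0, scale=scale)
    return 1.0 / (lam * p)

for x in (u + 10.0, u + 30.0, u + 60.0):
    print(f"{x:6.1f} mm -> ARI ~ {ari_years(x):8.1f} yr")
```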
Abstract
About 30% of freezing precipitation cases are observed to occur in a subfreezing atmosphere (contrary to the classical melting ice model). We explain these cases with the concept of the “supercooled warm rain process” (SWRP): the warm rain process can yield liquid hydrometeors at subfreezing temperatures whenever too few ice nuclei are available to create solid hydrometeors. We find that all of the freezing precipitation cases in a subfreezing atmosphere show a rapid decrease of moisture content in the zone above the inferred cloud top (decreasing from liquid to ice saturation in less than 20 mb), at temperatures ranging from 0° to −10°C. Additionally, this structure prevails among freezing cases (43%) much more than among solid or liquid cases (10% and 15%, respectively).
Regression experiments demonstrated that freezing precipitation was best described when (discretized) predictors were combined to describe particular physical processes, such as the SWRP. Besides the SWRP, the usual melting ice process variables, such as cold-layer size and surface temperature, are important for specifying freezing precipitation. The system handles liquid and solid precipitation adequately, but is deficient in specifying freezing cases. This last result is in agreement with previous studies, and reflects the small sample size due to the rarity of freezing cases.
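As a rough illustration of the kind of regression experiment described (not the paper's system), the sketch below classifies synthetic cases as liquid, solid, or freezing from discretized predictors analogous to those named above: surface temperature, cold-layer size, and an SWRP-like indicator. The class imbalance tends to leave the rare freezing class under-predicted, echoing the deficiency the abstract reports. All data, bins, and coefficients are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sketch (not the paper's regression system): specify
# precipitation type from discretized predictors. All data are synthetic.

rng = np.random.default_rng(7)
n = 5000
t_sfc = rng.normal(-2.0, 6.0, n)                 # surface temperature (C)
cold_depth = np.abs(rng.normal(100.0, 60.0, n))  # cold-layer size (mb)
swrp = (rng.random(n) < 0.2).astype(float)       # SWRP indicator (0/1)

# Synthetic labels: 0 = liquid, 1 = solid, 2 = freezing (kept rare).
score = -0.3 * t_sfc + 0.01 * cold_depth + 2.0 * swrp + rng.normal(0, 1, n)
label = np.where(t_sfc > 1.0, 0, np.where(score > 5.0, 2, 1))

# Discretize the predictors into bins, as the abstract describes.
X = np.column_stack([
    np.digitize(t_sfc, [-10.0, -5.0, 0.0, 5.0]),
    np.digitize(cold_depth, [50.0, 100.0, 200.0]),
    swrp,
])
model = LogisticRegression(max_iter=1000).fit(X, label)
pred = np.bincount(model.predict(X), minlength=3)
print(dict(zip(["liquid", "solid", "freezing"], pred.tolist())))
```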