Search Results
1. Introduction Modeling rainfall structures in space and time is necessary to better understand soil erosion, runoff, and pollutant transport processes at the watershed level (Baigorria and Romero 2007; Keener et al. 2007; Romero et al. 2007; Zhang and Garbrecht 2003). Simulating dry spell distributions (Baigorria et al. 2007b; Shin et al. 2010) and irrigation requirements in large areas (Romero et al. 2009) also has spatial implications in planning water distribution among
If the forecasts are reliable, then a graph of these probabilities will show equal values for all bins. The probability integral transform (Dawid 1984) is a similar procedure suitable for situations in which the forecast is presented as a continuous probability distribution function. A similar histogram is drawn, but the bins are based on quantiles of the distribution rather than on the ordered ensemble members. Both the ranked histogram and the probability integral transform can be presented
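The two constructions are straightforward to reproduce. The sketch below is a minimal illustration in Python with NumPy and SciPy; the synthetic data, ensemble size, and bin count are assumptions for illustration, not taken from the source. For a reliable system both histograms come out approximately flat.

```python
import numpy as np
from scipy.stats import norm

def ranked_histogram(ensemble, obs):
    """Relative frequency of the observation's rank among the K sorted
    ensemble members; roughly 1/(K+1) per bin if forecasts are reliable."""
    n, K = ensemble.shape
    counts = np.zeros(K + 1, dtype=int)
    for members, y in zip(ensemble, obs):
        counts[np.searchsorted(np.sort(members), y)] += 1
    return counts / n

def pit_histogram(u, n_bins=10):
    """Histogram of PIT values u_i = F_i(y_i) for continuous forecast CDFs;
    bins are quantiles of the distribution rather than ensemble ranks."""
    counts, _ = np.histogram(u, bins=n_bins, range=(0.0, 1.0))
    return counts / len(u)

# Reliable synthetic forecasts: Y and the members share the distribution N(mu, 1).
rng = np.random.default_rng(0)
mu = rng.normal(size=5000)
obs = mu + rng.normal(size=5000)
ens = mu[:, None] + rng.normal(size=(5000, 20))
print(ranked_histogram(ens, obs))      # roughly 1/21 in every bin
print(pit_histogram(norm.cdf(obs - mu)))  # roughly 0.1 in every bin
```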
so that the kth ensemble member is the k/(K + 1) quantile of Γ, that is, the unique solution of the equation Γ(x) = k/(K + 1) for x. It needs to be assumed that Γ is strictly monotonically increasing for the ensemble members to be well defined. The probability forecast Γ is reliable if P(Y ≤ y | Γ) = Γ(y) for all y. Together with a distribution for Γ (which is unimportant in the present context), these determinations completely specify a probabilistic model for (Y, X_1, …, X_K, Γ). The reliability
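To make the quantile construction concrete, here is a small sketch using SciPy's frozen distributions; the Gaussian forecast distribution is an illustrative stand-in for Γ, not taken from the source. Because ppf inverts a strictly increasing CDF, each member is the unique solution of Γ(x) = k/(K + 1).

```python
import numpy as np
from scipy import stats

def quantile_ensemble(gamma, K):
    """Members x_1 < ... < x_K with gamma.cdf(x_k) = k/(K + 1); well defined
    only when the CDF is strictly monotonically increasing."""
    levels = np.arange(1, K + 1) / (K + 1)
    return gamma.ppf(levels)

members = quantile_ensemble(stats.norm(loc=2.0, scale=1.5), K=9)
# For K = 9 the fifth member sits at the 0.5 quantile, i.e., the forecast median.
```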
Using Eqs. (12) and (13), the parameters of the beta distribution that defines g(p) can be determined in terms of the Brier skill score and the climatological probability of the event. Thus, to calculate a normative relative value score, the parameters one needs to specify are z, p_c, and BSS. These three parameters capture the details of the decision problem, the environment, and the forecast performance, respectively. 3. A behavioral model: Learning from experience The
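The excerpt omits the closed-form expressions themselves. Under the standard assumptions of this literature, a reliable forecast whose probabilities p follow Beta(a, b) with mean p_c has BSS = 1/(a + b + 1), which inverts to a = p_c(1 − BSS)/BSS and b = (1 − p_c)(1 − BSS)/BSS; whether these coincide exactly with Eqs. (12) and (13) of the source cannot be checked from the excerpt. A minimal sketch:

```python
def beta_params(bss, p_c):
    """Parameters of g(p) = Beta(a, b), assuming a reliable forecast with
    mean p_c and BSS = 1/(a + b + 1) (a standard result; the source's
    Eqs. (12)-(13) are not visible in the excerpt)."""
    if not 0.0 < bss < 1.0:
        raise ValueError("BSS must lie in (0, 1) for a proper beta density")
    s = (1.0 - bss) / bss            # a + b
    return p_c * s, (1.0 - p_c) * s

a, b = beta_params(bss=0.3, p_c=0.2)   # the mean a/(a + b) recovers p_c
```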
), and Kleeman (2002). Leung and North (1990) used information-theoretic measures such as entropy and transinformation in relation to predictability. Kleeman (2002) proposed using the relative entropy between the climatological and the forecast distribution to measure predictability. The applications of information theory in the framework of predictability are mostly concerned with modeled distributions of states and how uncertainty evolves over time. Forecast verification, however, is concerned
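Kleeman's relative-entropy measure is easy to state for a discrete state space. The sketch below (a uniform climatology and toy forecast probabilities, both purely illustrative assumptions) computes D(forecast || climatology), which is zero exactly when the forecast adds nothing to climatology:

```python
import numpy as np

def relative_entropy(p_fcst, p_clim):
    """Kullback-Leibler divergence D(p_fcst || p_clim) in nats over a common
    discrete set of states; requires p_clim > 0 wherever p_fcst > 0."""
    p = np.asarray(p_fcst, dtype=float)
    q = np.asarray(p_clim, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

clim = np.array([0.25, 0.25, 0.25, 0.25])   # uniform climatology (toy)
fcst = np.array([0.70, 0.20, 0.05, 0.05])   # sharper forecast distribution
print(relative_entropy(fcst, clim))          # > 0: the forecast is informative
```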
arises in the computation of fluxes spatially averaged over some domain [e.g., a general circulation model (GCM) grid box]: in general, there is a difference between the area-averaged mean wind speed and the magnitude of the area-averaged mean vector wind (e.g., Mahrt and Sun 1995). Accurate computations of (space or time) average fluxes require the development of models of the probability distribution of sea surface winds. A new era in the study of sea surface winds was ushered in with the
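The speed/vector distinction follows from Jensen's inequality: the area average of the speed |v| can only exceed or equal the speed of the area-averaged vector. A quick numerical check (synthetic Gaussian wind components; the values are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(2.0, 3.0, size=10_000)   # zonal component within the box, m/s
v = rng.normal(0.5, 3.0, size=10_000)   # meridional component, m/s

mean_speed = np.hypot(u, v).mean()            # area-averaged wind speed
speed_of_mean = np.hypot(u.mean(), v.mean())  # magnitude of the averaged vector

# mean_speed >= speed_of_mean always; the gap is what a flux estimate
# based only on the mean vector wind would miss.
print(mean_speed, speed_of_mean)
```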
rigorous dynamic decision making. The value of dynamic decision models for meteorological problems has been demonstrated by a number of studies (e.g., Katz and Murphy 1982; Murphy et al. 1985; Epstein and Murphy 1988; Wilks 1991; Katz 1993; Regnier and Harr 2006). However, a sequence of lagged NWP probability forecasts is a medium for dynamic decision making that has received little attention [only Regnier and Harr (2006) and McLay (2008) analyze decision making specific to this
mesoscale convective systems may span the filter scale of the forecast model (Shutts 2005). Shutts and Palmer (2007) applied a coarse-graining methodology to cloud-resolving model simulations of deep convection and were able to characterize the dependence of the probability distribution function of convective warming on the strength of convective forcing. The coarse-grained momentum forcing has also been determined and used to compute an effective streamfunction forcing. Building on this idea and
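For readers unfamiliar with the term, coarse graining here amounts to averaging high-resolution model output over blocks the size of a coarser (e.g., GCM) grid box. The sketch below is a generic block average, not the specific procedure of Shutts and Palmer (2007); the field and block size are placeholders:

```python
import numpy as np

def coarse_grain(field, block):
    """Block-average a 2-D high-resolution field onto a coarser grid,
    e.g., mapping cloud-resolving output onto grid-box means."""
    ny, nx = field.shape
    if ny % block or nx % block:
        raise ValueError("coarse blocks must tile the fine grid exactly")
    return field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

fine = np.random.default_rng(2).normal(size=(128, 128))  # stand-in for model output
coarse = coarse_grain(fine, block=16)                    # 8 x 8 block means
```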
As an example, probability theory is central to the distribution-oriented (DO) approach to verification (Murphy and Winkler 1987; Murphy 1997). Forecasts and observations are treated as random variables. Verification measures are defined based on the joint distribution of the forecasts and observations. We will employ a similar approach throughout this paper to define aspects of forecast quality. Consider an ensemble prediction system that produces a forecast of some continuous random variable
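In the DO framework every measure derives from the joint distribution p(f, x) and its factorizations (Murphy and Winkler 1987). A minimal sketch for discrete forecasts and observations follows; the toy binary data are assumptions for illustration:

```python
import numpy as np

def joint_distribution(forecasts, observations):
    """Empirical joint pmf p(f, x), the basic object of DO verification."""
    f_vals, f_idx = np.unique(forecasts, return_inverse=True)
    x_vals, x_idx = np.unique(observations, return_inverse=True)
    counts = np.zeros((f_vals.size, x_vals.size))
    np.add.at(counts, (f_idx, x_idx), 1)
    return f_vals, x_vals, counts / counts.sum()

f = np.array([1, 1, 0, 1, 0, 0, 1, 1])   # toy binary forecasts
x = np.array([1, 0, 0, 1, 0, 1, 1, 1])   # toy binary observations
_, _, p_fx = joint_distribution(f, x)

# Calibration-refinement factorization: p(f, x) = p(x | f) p(f)
p_f = p_fx.sum(axis=1)                   # refinement: how often each f is issued
p_x_given_f = p_fx / p_f[:, None]        # calibration: outcomes given the forecast
```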
predictions (Toth et al. 2006). On a fundamental level, all forecast verification procedures involve the investigation of the properties of the joint distribution of forecasts and observations (Wilks 2006). However, there can be many aspects of model performance and differing views of what constitutes a good forecast (e.g., Murphy 1993; Wilks 2006). As a consequence, a broad range of verification metrics is usually needed to analyze and compare forecast quality. To be useful for
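Since no single number captures all aspects of quality, verification reports typically combine several complementary metrics. A small illustrative suite for continuous forecasts (the particular choice of metrics is an assumption, not a list from the source):

```python
import numpy as np

def summary_metrics(fcst, obs):
    """A few complementary scalar verification metrics; each exposes a
    different aspect of the joint forecast-observation distribution."""
    fcst, obs = np.asarray(fcst, float), np.asarray(obs, float)
    err = fcst - obs
    return {
        "bias (ME)": err.mean(),
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "correlation": np.corrcoef(fcst, obs)[0, 1],
        # skill relative to always forecasting the observed climatological mean
        "MSE skill score": 1.0 - (err ** 2).mean() / obs.var(),
    }

rng = np.random.default_rng(3)
truth = rng.normal(size=500)
print(summary_metrics(truth + rng.normal(scale=0.5, size=500), truth))
```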