## 1. Introduction

Over tropical and subtropical latitudes, a tropical cyclone is one of the most significant weather-related threats to shore- and sea-based locations. Preparations that can substantially reduce the impact of tropical cyclone conditions include evacuation, moving mobile assets such as ships and aircraft, and preparing stationary infrastructure for flooding and high winds. However, preparations can incur substantial costs. The cost of civilian evacuations is usually estimated at approximately $1M per mile of coastline evacuated (Whitehead 2003), but may be as high as $50M in some areas (Adams and Berri 2004). Baker (2002) summarizes studies of preparation costs in specific industries, which run into tens of millions of dollars. The cost of evacuating U.S. Navy ships from Norfolk, Virginia, during Hurricane Floyd (in 1999) was estimated at $14–$17M,^{1} and during Hurricane Isabel (in 2003), the direct ship sortie costs were $36M.^{2} Avoiding unnecessary preparations would therefore be very valuable.

The level of preparation for an oncoming storm depends on two factors. One is the availability and accuracy of forecasts with enough lead time to allow for appropriate preparations. Modern observations, numerical models, forecast methods, and coastal infrastructure have significantly reduced the possibility of tragedies such as the Galveston, Texas, hurricane of 1900, which caused an estimated 8000 deaths (Jarrell et al. 2001). The second factor is the decision process of the public emergency managers, property owners, military commanders, and members of the public who decide whether, when, and how to prepare. In the face of an approaching tropical cyclone, decisions are often made when there is still a substantial amount of uncertainty as to where—and sometimes whether—the storm will make landfall.

Between 1970 and 1998, 72-h hurricane track forecast errors declined from a 5-yr average of about 750 km to an average of about 400 km (McAdie and Lawrence 2000), and longer-range forecasts have improved enough to allow for 5-day forecasts that are as accurate as the 3-day forecasts were 15 yr ago. The addition of strike probabilities to the National Hurricane Center’s (NHC’s) forecast has given users further useful information (Jarrell and Brand 1983). Although forecast accuracy is improving, there will always be uncertainty at the lead times required for some types of preparation such as mass evacuation and moving ships to sea. Therefore, improving the decision-making process in the face of uncertainty has the potential to yield substantial value. There is considerable room for extracting more value from forecasts without further improvements in accuracy by improving decision processes.

Hurricane preparation decisions are usually examined in a static, one-time cost:loss framework, which is a simple decision-analytic approach that has been widely used to investigate the value and optimal use of weather information (for an introduction, see Katz and Murphy 1997, chapter 6). The ultimate impact of a tropical cyclone is determined by a series of decisions in which weather information and the natural variability of tropical cyclones are critical components. To more accurately represent the problem of a decision maker facing an evolving tropical cyclone, the cost:loss scenario must be extended to include multiple interrelated decisions.

This study proposes replacing the static “prepare or do not prepare” decision-making model with a dynamic “prepare or wait” model that allows for the incorporation of new information in the form of updated forecasts. Additional value can be extracted by planning dynamically for updated forecasts; however, to make these decisions optimally, decision makers require information about how the strike probability will change over time. In this paper, a Markov chain model of hurricane travel with a binary weather outcome is developed and its parameters estimated from historical tracks.^{3} Combining this storm model with a dynamic decision model for a stationary target location (which can be land or sea based), we show that the value of anticipating improving future forecasts and adjusting early storm decisions accordingly can reduce the expected total cost associated with a hurricane strike by up to 8%. For some decision makers, the value of anticipating improving forecasts is comparable to the value of reducing the lead time required for a given preparation action by 6–12 h. This added value is in addition to the value of simply reevaluating a decision not to prepare each time a new forecast becomes available.

The savings due to dynamic optimization come from a reduction in false alarms. False alarms may be even more harmful than their direct costs indicate because a high false alarm rate may reduce a decision maker’s willingness to prepare. Roulston and Smith (2004) have shown in a theoretical model that the optimal choice of an action threshold for ordering preparation is sensitive to a compliance rate that is a function of the false alarm rate in the forecast process. On the other hand, Dow and Cutter (1998) do not observe this “crying wolf” effect in their empirical study, and Baker (2002) reports that evacuation rates did not drop after two false alarms in 1985. The savings due to dynamic optimization come at the cost of delayed, and therefore more costly, evacuations in some cases. The dynamic decision method balances these costs against the benefits to achieve an expected net savings.

The value investigated in this analysis arises from the decision-making process, not from improving the forecast. A model, such as the Markov hurricane model, that allows dynamic optimization will be less skillful at forecasting than existing atmospheric models. In practice, the Markov model should not be utilized as a forecasting tool. Instead, it could be used to generate measures of information and uncertainty and their expected evolutions. These measures could serve as an “information forecast.” Such an information forecast could be used in real time, together with skillful track and probability forecasts for various weather conditions, to adjust for the value of waiting and to approximate dynamic optimization.

It is important to note that any decision process that delays preparation is valid only if the critical costs are included in the analysis. This means that if delaying a given preparation action would increase risks to life and property, the delay must be appropriately balanced against the risks associated with taking immediate action. Our decision model is conceptual, representing a single type of preparation action, such as a sortie of ships from port. It is not intended to suggest that it is ever advisable to make no preparations in the face of an approaching hurricane.

The meteorology community is increasingly able to measure and communicate the uncertainty associated with its forecasts. As this study illustrates, the probabilities of future events that are conditioned on the current best information are not a complete characterization of the uncertainty. This has been illustrated previously by Mjelde and Dixon (1993), Wilks (1991), and Epstein and Murphy (1988).

In addition to the quantitative data from observations, atmospheric models, and climatology, meteorologists have the insights gained from experience that affect their own assessments of future events. They know when they can anticipate information. For example, Atlantic hurricanes commonly travel west, and then some turn north and eventually east. The timing of the turns largely determines whether and where the storm will strike the Atlantic coast. If the storm does not turn before some critical time, the storm will strike the coast, and therefore a seemingly large amount of uncertainty about landfall will be resolved in the immediate preceding period. Meteorologists and decision makers can express this concept explicitly but in a qualitative manner. However, current uncertainty estimates are often based on historical error rates, although the NHC is developing a more sophisticated hurricane strike-probability model (Gross et al. 2004). Quantifying uncertainty in a dynamic framework can help meteorologists communicate their expertise to decision makers. It can also help to refine hurricane preparation policies to substantially reduce the average costs of preparing for storms.

Decision analysis provides a formal structure for analyzing problems made difficult by uncertainty, by modeling the uncertainty that cannot be eliminated and its consequences. The decision-analytic framework has been used extensively in the meteorological literature to estimate the value of forecasts (Leigh 1995; Wilks and Hamill 1995; Adams et al. 1995; Brown et al. 1986). A back-of-the-envelope calculation of the value of hurricane forecasts in the eastern United States puts the value at about $7B per year. This assumes three landfalls per year (Powell and Aberson 2001), with 150 miles of coastline affected per storm, and $17M of damage per mile. The $17M figure is based on the assumption that NHC warning areas are cost effective up to the limits of the warning area: assuming the landfall error is normally distributed along the coast, that about 460 miles of coastline are warned per storm (Jarrell and DeMaria 1999), and that the cost of preparation is $1M per mile (Whitehead 2003), this yields an estimate of about $17M per mile of damages if the warning were not issued.

The value of information is measured based on expected cost rather than realized cost because a good decision can sometimes lead to a poor outcome. False alarms are a classic example. It may be a good decision to evacuate in advance of a hurricane, although the hurricane may never make landfall. Retrospectively, the evacuation was costly and unnecessary (a poor outcome) but the decision minimized expected cost. These concepts are used in a tropical cyclone context by Considine et al. (2004). This framework can also be used to generate insights about the value of improvements in accuracy. For example, Considine et al. showed that incremental increases in accuracy of the forecasts of tropical cyclone track and intensity would produce a significant increase in forecast value.

In the meteorological literature, decisions have often been modeled in a one-stage 2 × 2 cost:loss framework (Katz and Murphy 1997, chapter 6). This framework is easy to understand and to analyze, and may be applicable to some real-world decisions. However, it is not necessarily appropriate for every meteorology-related decision. Reverting to decision analysis fundamentals can greatly expand the range of decisions that can be considered in estimating the value of information and for structuring decision problems. In the context of hurricane forecasting, dynamic decision making creates value by reducing the frequency of false alarms, and thereby decreasing the expected total cost (cost of preparation plus loss of property or life due to hurricane strikes at an unprepared target). This additional value will increase the value of existing forecasts.
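For reference, the one-stage 2 × 2 cost:loss rule mentioned above can be stated in a few lines. This sketch is ours, with illustrative numbers, not a calculation from the paper: the decision maker prepares whenever the strike probability exceeds the cost:loss ratio.

```python
# Minimal sketch of the static (one-stage) cost:loss decision rule:
# prepare whenever the strike probability p exceeds the cost:loss ratio C/L.
# Numbers below are illustrative only.

def static_decision(p: float, cost: float, loss: float) -> str:
    """Return the action that minimizes expected cost in the static model."""
    expected_if_prepare = cost      # preparation cost is paid either way
    expected_if_wait = p * loss     # the loss is incurred only on a strike
    return "prepare" if expected_if_prepare < expected_if_wait else "wait"

# With C/L = 0.1, the break-even strike probability is 0.1:
print(static_decision(p=0.15, cost=0.1, loss=1.0))  # prepare
print(static_decision(p=0.05, cost=0.1, loss=1.0))  # wait
```

A dynamic model replaces this single comparison with a sequence of such decisions, one per forecast update, as developed in the remainder of the paper.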

A dynamic decision includes more than one decision point, such that 1) the alternatives available at later decision points, or their consequences, depend on the decisions at earlier decision points; and 2) relevant information is received between decision points. In a meteorological context, the information that is received between decision points is usually a weather forecast. The information can also be the outcome of an earlier decision.

Several examples in the meteorological literature use dynamic decision models, mostly for the purpose of estimating the economic value of existing weather forecasts and hypothetical future forecasts. In all cases, the sequential decisions are interdependent because the consequences of each decision depend on earlier actions, but generally there are multiple weather events that are not probabilistically related, and there is only one forecast per weather event.

The major examples are the fallowing–planting problem, treated in Katz et al. (1987) and Brown et al. (1986), and the fruit–frost problem, treated by Katz et al. (1982) and Katz and Murphy (1990). Stewart et al. (1984) conducted interviews with the decision makers (orchardists) in the fruit–frost problem and found that they make their decisions on the basis of ongoing monitoring of temperatures, rather than making a static decision once each night. Therefore, each night’s decision would be more realistically modeled as a sequence of decisions. To our knowledge, Considine et al. (2004) is the only previous example using a prescriptive decision model for hurricane planning. Their decision rule and value-of-forecast estimate depend on just one forecast per storm. However, they repeat their static calculation for multiple lead times.

Mjelde and Dixon (1993) build anticipation into their model such that the decision maker anticipates that there will be a forecast, but does not know what the forecast will be. They observe that the economic value of longer lead times is overstated when the (unrealistic) assumption is made that the decision maker does not anticipate the forecasts, but acts on them when they appear. The Mjelde and Dixon framework can accommodate a stochastic dynamic model of the evolution of the climate, but they assume the climate probabilities are not conditioned on earlier information.

Wilks (1991) and Epstein and Murphy (1988) are the only prior examples, to our knowledge, in which two correlated forecasts for a single event are used in decision making. Both are based on decisions with the same cost:loss ratio structure as the multiperiod fruit–frost problem. In this structure, a loss can be incurred at most once, but the cost of protection may be incurred in each of many periods. There are multiple weather events, and up to two forecasts for each event. Epstein and Murphy use a conceptual model for forecasts of adverse weather. Wilks derives his stochastic model for the probabilistic relationship between the two forecasts, and between the forecasts and the outcome of the weather event (in his model, precipitation), from historical precipitation and forecast data, and uses dynamic optimization to solve for the best decision at each time step.

Katz (1993) also uses a multiperiod version of the same cost:loss ratio problem, in which the previous period’s weather serves as a forecast. The forecast is related to the weather outcome via an autocorrelated Markov chain model of persistence. Though the persistence model would allow for the use of earlier forecasts in decision making, only one forecast for each event is used because the cost of protection is modeled as constant, and therefore there is no value to making the decision before the best—that is, last—forecast becomes available.

Murphy and Ye (1990) also model successive forecasts with increasing accuracy for a single event and investigate the trade-off between increasing cost and increasing accuracy as lead time decreases. However, they assume that the decision maker must decide ex ante the lead time at which he or she will acquire the forecast and make a decision. Only one forecast for the event is used in decision making.

The current paper is the first to stochastically model decision making with respect to a sequence of more than two forecasts with improving accuracy for a single event. The tropical cyclone context is very appropriate for this approach because in reality, public managers, military commanders, and other decision makers monitor the storm’s progress and reevaluate their decisions every time a forecast is updated.

## 2. Markov tropical cyclone model

Optimizing decisions in a dynamic context requires a complete stochastic model that describes the probability of every sequence of events and the probability of every event, conditional on each possible outcome of earlier stages.

### a. Stochastic modeling

High-resolution meteorological models based on physical laws may describe the current and future atmospheric conditions very accurately, but they do not model the uncertainty in the evolution of the atmosphere. Most methods for adding uncertainty to physical models, including ensemble forecasting, are based on simulation. In a simulation, a system’s behavior is recalculated many times, each time with a different set of values for the uncertain parameters. After many runs, there are many sets of results for the behavior of the system, and the assumption is made that the stochastic nature of the real system’s behavior, while uncertain, is approximately described by the frequency distribution of the system’s behaviors in the simulation. A Markov model contains more information about the evolution of the system and can be used as a tool to generate a simulation, but it can also be used for analytical decision making.

Markov models have been used in decision making, in prediction (especially as a way to model persistence or other time dependence), and in developing probability forecasts, especially for precipitation (see references in Wilks 1995, p. 296). Wilks (1995, chapter 8) describes Markov chains and Markov processes that have continuous state variables, and Katz (Murphy and Katz 1985, chapter 7) summarizes some applications of Markov chains in a meteorological context. Readers interested in a deeper treatment of decision making on the basis of Markov models are directed to Puterman (1994).

In a Markov model, the properties of a system at any time are completely described by its state at that time, and the system’s future evolution is described by random transitions among these states. The key feature of a Markov model is the memoryless property, by which the evolution of the system (here, the hurricane) depends on its history only via its current state. Therefore, according to a Markov model, two hurricanes with the same atmospheric conditions have the same probabilistic future. This is the property that makes Markov models more analytically tractable than complex physical models. In some sense, atmospheric dynamics are naturally Markovian: in physical atmospheric models, the future evolution of the atmospheric variables is completely determined by their values at any given time. These values may not be known with certainty, and even the most detailed physical models are necessarily a simplification of the physical state of the atmosphere. The temperature, pressure, and other parameters of every element in such a model drive the future evolution of atmospheric conditions, but they would constitute far too many state variables for a tractable Markov model. The rates of change of atmospheric parameters may be necessary to model the atmosphere usefully, and can be included as state variables in a Markov model. As compared with physical atmospheric models, and even with models based on climatology and persistence (e.g., CLIPER), Markov models are limited in the level of complexity and detail they can describe, as well as in their forecast accuracy; however, they contain much more probabilistic information.

### b. States of the hurricane

We use a discrete-time Markov chain model of the hurricane evolution. In the terminology of Wilks (1995) and Katz (1987), this is a first-order, multistate Markov chain. The state of a hurricane is defined by the location of its center and its direction of travel. The motion is modeled according to transitions among these states, which occur at 6-h time steps. Specifically, the hurricane location is the 1° latitude × 1° longitude cell containing the hurricane center within the region 0°–70°N and 0°–100°W. For hurricanes in the region 10°–25°N and 55°–80°W, the direction of travel is also defined, because in this region direction changes are critical. For example, whether a hurricane recurves or not will have a profound impact on the potential landfall location.

The direction of travel is categorized as “north” if its direction is primarily toward the north, “west” if its direction is primarily toward the west, and “other” otherwise (see Fig. 1). If a hurricane is stationary, its direction is classified as other. The cutoff between north- and west-moving hurricanes is at approximately 0.7 radians west of north, rather than the more natural north-northwest division of *π*/4. This cutoff was selected to minimize the occurrence, in the historical database, of hurricanes that changed direction between west and north more than once. The state of the hurricane at time step *t* is denoted as *s*_{t} = (lat_{t}, long_{t}, dir_{t}). The hurricane dissipates at time *T*, so a hurricane track can be denoted **s** = (*s*_{1}, *s*_{2}, . . . , *s*_{t}, *s*_{t+1}, . . . , *s*_{T}). When hurricanes are not in the 10°–25°N and 55°–80°W region, they are not differentiated by direction of travel. Therefore, there are a total of 70 × 100 + 2 × 15 × 25 = 7750 possible states.

The Atlantic basin hurricane database (HURDAT) dataset (Jarvinen et al. 1984) contains the tracks for Atlantic hurricanes and tropical storms. Although HURDAT is a best-track dataset in that storm location and intensity estimates were determined via postanalysis for all Atlantic hurricanes from 1851 to 2004, only the storms from 1950 through 2002 are used here. The location of a hurricane center as recorded in the HURDAT dataset is latitude and longitude to the nearest tenth of a degree. These values are used to calculate the hurricane direction of travel. The direction of a hurricane at time *t* depends on the changes in latitude and longitude between time *t* − 1 and time *t*. The direction for the first recorded location of a given hurricane is therefore undefined, and is assigned the same value as the direction in the next time step. Each observed position and direction of the 538 storms in the dataset is categorized into one of the 7750 states, and each track is defined as a sequence of observed states. The historical database contained storm observations in only 3333 of the 7750 possible states; only these states were used in the model, with the addition of *j* = 0 as the state that indicates that a storm has terminated. The set of 3334 states is called the state space and is denoted Ω.
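The discretization described above can be sketched in a few lines of code. This is our illustration, not the paper's implementation: the function names are ours, and the handling of region boundaries, of motion slightly east of due north (lumped into “other”), and of west longitude as a positive coordinate are our assumptions.

```python
import math

def travel_direction(dlat: float, dlon_west: float) -> str:
    """Classify 6-h motion as 'north', 'west', or 'other'.
    dlon_west > 0 means westward movement. The north/west cutoff is
    ~0.7 rad west of due north, per the text; stationary storms and
    south- or east-moving storms are classified 'other' (our assumption
    for motion east of due north)."""
    if dlat == 0.0 and dlon_west == 0.0:
        return "other"
    theta = math.atan2(dlon_west, dlat)  # angle west of due north
    if 0.0 <= theta < 0.7:
        return "north"
    if 0.7 <= theta <= math.pi / 2:
        return "west"
    return "other"

def storm_state(lat: float, lon_west: float, dlat: float, dlon_west: float):
    """Map an observation to a discrete state: the 1x1-degree cell, plus a
    direction label inside the 10-25N, 55-80W region (boundary conventions
    are our assumption)."""
    cell = (math.floor(lat), math.floor(lon_west))
    if 10 <= lat < 25 and 55 <= lon_west < 80:
        return (*cell, travel_direction(dlat, dlon_west))
    return cell
```

For example, a storm at 15.3°N, 60.2°W moving mostly northward maps to the state (15, 60, "north"), while a storm at 30.0°N, 75.0°W maps to the undirected state (30, 75).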

### c. Transition probabilities

Hurricane motion is described by transitions among the states, which occur at discrete time steps of 6 h. If a hurricane is in a given state *s*_{t} = *j* at time *t*, then at time *t* + 1 it will be in state *k* with probability *q*_{jk} = *P*(*s*_{t+1} = *k* | *s*_{t} = *j*). The value *q*_{jk} is called a transition probability, and is a function only of *j* and *k*, not of time. The speed and direction of travel of a hurricane in state *j* are reflected in the states to which it can transition (i.e., states *k* for which *q*_{jk} > 0). Although *q*_{jk} is defined for all *j*, *k* pairs, most of these probabilities are equal to zero, as storms rarely move to distant cells in one 6-h period. In addition, hurricanes can remain in the same state for more than one time step; that is, *q*_{jj} > 0 is allowed. The number of transition probabilities is 3334^{2}, of which 9445 (less than 0.1%) are nonzero.
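Because so few *q*_{jk} are nonzero, a sparse representation is natural. The sketch below is our illustration with made-up states and probabilities (the real model has 3334 states and 9445 nonzero entries): the chain is stored as a dict of nonzero rows, each row must sum to one, and one 6-h transition is sampled from it.

```python
import random

# Illustrative sparse storage of transition probabilities: only nonzero
# q_jk are kept, as a dict mapping state j -> {k: q_jk}. State labels and
# probabilities here are made up for illustration; 0 denotes termination.
Q = {
    "A": {"A": 0.2, "B": 0.7, 0: 0.1},
    "B": {"B": 0.5, 0: 0.5},
    0:   {0: 1.0},   # the terminated state is absorbing
}

def step(state, rng=random):
    """Sample the state one 6-h time step ahead."""
    ks, qs = zip(*Q[state].items())
    return rng.choices(ks, weights=qs, k=1)[0]

# Every row of a Markov chain must sum to one.
assert all(abs(sum(row.values()) - 1.0) < 1e-12 for row in Q.values())
```

Repeated calls to `step` starting from a formation state generate a synthetic track, which is how a simulation can be driven from the same object used for analytical calculations.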

### d. Strike probabilities

A strike is defined to occur at a given geographic location (target) if the hurricane center passes through the 1° latitude × 1° longitude cell containing the target, or through one of the adjacent cells to the north, south, east, or west of the stationary target, or through the diagonal cells to the southeast and southwest of the target, but not the cells to the northwest and northeast. This reflects the fact that in general the extent of hurricane-force winds is greater on the right-hand side of the storm. For a given target, a set *κ* of states are in the strike zone; any storm that passes through one of these states strikes the target. The number of states in the strike zone ranges from 7 to 21 depending on the location of the target. For targets whose strike zone is within the region 10°–25°N and 55°–80°W, where direction is also a state variable, there are 7 × 3 = 21 states in the strike zone. For targets whose strike zone does not overlap with the 10°–25°N and 55°–80°W region, there are only seven states in the strike zone.
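The seven strike-zone cells for a target can be enumerated directly; this helper is our illustration (using latitude increasing northward and west longitude increasing westward), not code from the paper.

```python
def strike_zone_cells(lat: int, lon_west: int):
    """The 7 cells whose occupation counts as a strike on a target in cell
    (lat, lon_west): the cell itself, the four edge neighbors, and the
    SE/SW diagonals, but not NW/NE (hurricane-force winds extend farther
    on the storm's right-hand side). Coordinates: +lat = north,
    +lon_west = west."""
    offsets = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),  # center, N, S, W, E
               (-1, -1), (-1, 1)]                          # SE, SW
    return {(lat + di, lon_west + dj) for di, dj in offsets}
```

For a target in cell (25, 77), this yields seven cells including the southeast diagonal (24, 76) but excluding the northwest diagonal (26, 78).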

For each state *j* ∈ Ω, the instantaneous strike probability is denoted *p*_{j} and defined as the probability that a hurricane passing through state *j* will eventually strike the target. For a given target, the values of *p*_{j} are the solutions to a set of simultaneous equations,

*p*_{j} = 1 for *j* ∈ *κ*, and *p*_{j} = Σ_{k∈Ω} *q*_{jk} *p*_{k} for *j* ∉ *κ*, (1)

with *p*_{0} = 0 for the terminated state. All the information available at time *t* is contained in the state of the hurricane, and therefore the probability that a hurricane in state *j* will eventually strike, conditional on the information at time *t*, is *p*_{j}. For information to be considered good in a given state, meaning that accuracy is high and uncertainty is low, the state strike probability should be close to zero or close to one.

As reflected in Eq. (1), the value *p*_{j} depends on the strike probabilities in the next time step, which in turn reflect strike probabilities in the following time step. However, *p*_{j} compresses the future probabilities into a single, scalar value. A value of *p*_{j} = 0.5 could reflect that in the next 6 h, all uncertainty will be resolved and either *p*_{k} = 1 or *p*_{k} = 0 ∀ *k* such that *q*_{jk} > 0. Alternately, it is possible to have *p*_{j} = 0.5 in a state *j* from which information will not improve in the next 6 h (*p*_{k} = 0.5 ∀ *k* such that *q*_{jk} > 0), or something in between. The Markov model completely describes the probabilistic evolution among states, through all the possible terminal states of the hurricane. Based on the Markov model, the way uncertainty will be resolved can be used quantitatively in decision making.

As the hurricane evolves through many states, its instantaneous strike probability also evolves: *p*_{s1}, *p*_{s2}, . . . , *p*_{sT}. Sometimes the strike probability will increase (decrease) monotonically. For example, a hurricane may form and then move directly toward (away from) the target such that its strike probability is increasing (decreasing) throughout its progress. On the other hand, a hurricane may evolve through states with increasing strike probability, then change course and head away from the target, so that its strike probability declines. As will be shown in section 5, this nonmonotonicity can lead to a high rate of false alarms if the value of waiting is neglected.
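The simultaneous equations that define the instantaneous strike probabilities can be solved by fixed-point iteration (or a direct linear solve). The sketch below is ours, on a made-up three-state chain, not the paper's 3334-state model: states in the strike zone are pinned at probability one, the terminated state at zero, and all other states are repeatedly updated as the probability-weighted average of their successors.

```python
# Toy chain (made-up states and probabilities): a storm can drift between
# "far" and "near", hit the target, or die out (state 0).
Q = {
    "far":  {"near": 0.4, 0: 0.6},
    "near": {"hit": 0.5, "far": 0.2, 0: 0.3},
    "hit":  {0: 1.0},
    0:      {0: 1.0},
}
STRIKE_ZONE = {"hit"}

def strike_probabilities(Q, strike_zone, iters=200):
    """Fixed-point iteration: p_j = 1 in the strike zone, p_0 = 0 at
    termination, otherwise p_j = sum_k q_jk * p_k."""
    p = {j: (1.0 if j in strike_zone else 0.0) for j in Q}
    for _ in range(iters):
        for j in Q:
            if j in strike_zone or j == 0:
                continue  # boundary values stay fixed
            p[j] = sum(q * p[k] for k, q in Q[j].items())
    return p

p = strike_probabilities(Q, STRIKE_ZONE)
```

On this toy chain the exact solution is *p*_near = 0.5/0.92 ≈ 0.543 and *p*_far = 0.4 × 0.5/0.92 ≈ 0.217, which the iteration recovers.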

### e. Data fitting and calibration

The transition probabilities were derived from the climatological data in the HURDAT dataset (Jarvinen et al. 1984) using hurricane positions at 6-h intervals for the 538 tropical cyclones between 1950 and 2002. The transition probability between two states *j* and *k* is denoted as *q*_{jk}, and is set equal to the fraction of all hurricanes in the database that passed through state *j* that then moved to state *k* in the next observation. The probability distribution of hurricane formation across states is denoted as **r**, where *r*_{j} is the relative frequency (fraction) of the historic hurricanes in the database that formed in state *j*.
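The relative-frequency estimation of *q*_{jk} and *r*_{j} described above amounts to counting observed transitions and formation states. This sketch is ours, with made-up tracks standing in for the HURDAT-derived state sequences; each track ends in the terminated state 0.

```python
from collections import Counter, defaultdict

# Made-up tracks: each is a sequence of discrete states ending in 0.
tracks = [
    ["A", "B", "B", 0],
    ["A", "B", 0],
    ["B", 0],
]

def fit(tracks):
    """Relative-frequency estimates of transition probabilities q_jk
    (fraction of departures from j that go to k) and formation
    distribution r_j (fraction of storms that formed in j)."""
    counts = defaultdict(Counter)
    formed = Counter(t[0] for t in tracks)
    for t in tracks:
        for j, k in zip(t, t[1:]):
            counts[j][k] += 1
    q = {j: {k: n / sum(c.values()) for k, n in c.items()}
         for j, c in counts.items()}
    r = {j: n / len(tracks) for j, n in formed.items()}
    return q, r

q, r = fit(tracks)
```

Here every departure from "A" went to "B" (so *q*_{AB} = 1), three of the four departures from "B" were terminations (*q*_{B0} = 0.75), and two of the three storms formed in "A" (*r*_{A} = 2/3).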

Forecast tracks are not defined in this model. However, the probability distribution about the most likely track that results from forward propagation of the storm state using the Markov transition probabilities is well calibrated to the NHC strike-probability forecasts. Table 1 compares the maximum strike probability at 12-, 24-, 36-, 48-, and 72-h lead times for the NHC forecasts and for the Markov model. A simulation based on the model can be used to develop a probability distribution of future storm locations, which implies a most likely future track. An example of such a 72-h simulation is given in Fig. 2a and compared with an NHC forecast track and strike-probability ellipses (Fig. 2b) for Hurricane Isabel. The most likely tracks generated by our model are not necessarily close to the NHC forecast tracks because, as discussed earlier, the Markov model is designed to describe uncertainty rather than to maximize forecast accuracy, and is not highly skillful.

## 3. Modeling tropical cyclone preparations

The real-world problem that we model occurs each time an Atlantic tropical cyclone forms. Each tropical cyclone is treated as an independent event. The problem is viewed from the perspective of a single decision maker with assets at a fixed geographical location, which we call the target. The decision maker can make preparations that will reduce the damage caused by the hurricane if it strikes at the target location. For example, given enough lead time and a good forecast, ships and aircraft can be moved from the path of the hurricane, homeowners can board up their doors and windows, and people can evacuate. However, preparation is costly and/or its effectiveness depends on the lead time at which it is initiated. The decision to initiate preparation must be made on the basis of incomplete information, that is, the forecast, which is the best information available at the time.

### a. The alternatives

Usually, analysis of the value of forecasts is based on the assumption that the lead time required to complete a preparation is fixed, and/or that there is only one possible preparation action. However, many decision makers have more flexibility than this assumption reflects. For example, when a tropical cyclone threatens a naval installation, a set of predetermined disaster-preparedness actions is implemented. These “conditions of readiness” are based on the anticipated arrival of sustained 50-kt winds. To avoid unnecessary preparations as much as possible, decision makers would like to wait until the last possible opportunity to initiate a preparation. However, the timing of the last possible opportunity is not precisely determined. First, the lead time remaining before a strike, or before conditions that will hamper further preparation, is uncertain. Delaying increases the risk that there will not be enough lead time to complete a preparation. Second, the amount of lead time required to complete a preparation may be flexible. For example, in 2004, Hurricane Charley intensified rapidly immediately before landfall, causing the U.S. Navy to order a sortie of ships from Mayport, Florida, with less lead time than they would usually allow. This is evidence that, if necessary, a partial evacuation preparation can be completed in a shorter time at greater cost (or with reduced effectiveness). Moreover, decision makers can reevaluate their decisions every time a forecast is updated, and decide to prepare, abandon previous preparations (if the decisions are staged), or delay further.

To model this flexibility, we expand the decision maker’s alternatives so that he or she has a sequence of decisions at discrete time steps. We model the decision for one type of preparation action. Many types of preparation available to a single decision maker (e.g., to sortie ships from port, and to evacuate personnel from the port) would be modeled separately. Preparation is therefore modeled as binary, as in many cost:loss problems, including in Considine et al. (2004). At each time step, the chosen action is denoted as *a*, where *a* = 1 if preparation is chosen, and *a* = 0 if delay is chosen. The action will be a function of the state of the hurricane *s*_{t} at time *t*, defined in the next section, and will therefore be denoted *a*_{s}. Because of the memoryless property of the Markov model, the decision for a hurricane in a given state will not depend on *t*, and therefore the subscript *t* is suppressed. However, preparation can be made no more than once per hurricane. The hurricane preparation decision is now framed as an optimal timing problem: a decision of when, not whether, to prepare.
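The prepare-or-wait timing problem can be sketched as an optimal stopping problem on the Markov chain. This is our simplified formulation with made-up numbers, not the paper's full decision model: in each nonterminal state the decision maker either prepares now, paying a state-dependent cost, or waits one 6-h step; an unprepared strike costs *L* = 1, and a storm that dissipates without striking costs nothing.

```python
# Toy optimal stopping sketch (our formulation, made-up numbers).
# Transition probabilities for the decision-relevant states:
Q = {
    "far":  {"near": 0.4, 0: 0.6},
    "near": {"hit": 0.5, "far": 0.2, 0: 0.3},
}
COST = {"far": 0.10, "near": 0.18}   # preparing later (shorter lead time) costs more
L = 1.0

def solve(Q, cost, loss, iters=200):
    """Value iteration: V_j = min(prepare now, expected cost of waiting)."""
    V = {j: 0.0 for j in Q}
    V["hit"], V[0] = loss, 0.0       # unprepared strike vs. harmless decay
    for _ in range(iters):
        for j in Q:
            wait = sum(q * V[k] for k, q in Q[j].items())
            V[j] = min(cost[j], wait)
    policy = {j: ("prepare"
                  if cost[j] <= sum(q * V[k] for k, q in Q[j].items())
                  else "wait")
              for j in Q}
    return V, policy

V, policy = solve(Q, COST, L)
```

On this toy chain the optimal policy waits in the "far" state (expected cost 0.4 × 0.18 = 0.072, below the immediate cost of 0.10) and prepares in the "near" state, illustrating how the value of waiting enters the decision.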

### b. Preparation cost profile

The static cost:loss framework is equivalent to assuming that before a certain point, which we call the critical lead time *τ*_{crit}, the cost of preparation has a constant value *C*_{crit}, and after that point, no preparation is possible. Adding flexibility to the model implies that even after *τ*_{crit} passes, there are still preparation actions available that would reduce the amount of damage sustained if a hurricane struck the target. However, it is fair to assume that these actions are more costly and/or less effective than preparation at *τ*_{crit}. For example, removing boats from the water, but not from the threatened region would reduce damage, but not as much as sailing them entirely out of the way of the hurricane.^{4} The cost of preparation depends on the lead time remaining at the time the preparation is initiated, which is taken to be immediately following the decision. Therefore, we model cost as a function of the minimum possible remaining time before a hurricane strikes the target, denoted *τ*, which captures the increase in the cost of preparation if the action is taken with urgency, even if the actual lead time turns out to be longer than the minimum.^{5} Each decision maker has only one cost function for a given preparation action, which depends on the parameter *τ*_{crit}.^{6}

Costs and losses are normalized such that the mitigable portion of the damage caused by a hurricane striking an unprepared target is *L* = 1. The value of *L* includes all damage that could be reduced or avoided by preparation, including loss of life and injuries. Viscusi and Aldy (2003) cite estimates of the value of a statistical life (used in analysis of the value of reducing fatality risks) that range from $0.9M to $20.8M in year 2000 dollars, which is enough to pay for a considerable, but not unlimited, amount of preparation. The Environmental Protection Agency used a baseline value of $6.1M (in 1999 dollars) per statistical life in a study of water contamination (Stedge 2000). At $6.1M per life saved, a forecast leading to an evacuation for the Galveston hurricane of 1900 alone would have been worth almost $50B (in 1999 dollars). Lumping and balancing economic damages with risks to life and health is morally and practically difficult, but it is an unavoidable responsibility of government decision makers, not only in emergency management and planning, but in environmental protection, homeland security, occupational safety, and many other public functions.

The preparation cost *C*(*τ*) is expressed as a fraction of the maximum mitigable loss: it equals *C*_{crit} at *τ* = *τ*_{crit} and strictly increases toward *L* as *τ* declines to zero. This reflects the assumptions that there is always a way to mitigate the effects of a strike at least somewhat, and that no preparation more costly than the hurricane damage will be considered.

The shape of the cost function is specific to a decision maker’s vulnerability, alternatives, and costs. The alternatives may vary by context; for example, there may be preparations that cannot be initiated at night, so a given preparation might have different cost curves depending on the time of day. To conceptually illustrate the value of dynamic optimization, we examine the performance of dynamic and static policies using both a linear function and an exponential function, each increasing from *C*(*τ* = *τ*_{crit}) = *C*_{crit} to *C*(*τ* = 0) = *L*. Murphy and Ye (1990) use a similar, exponential cost function. We further assume that the cost:loss ratio at the critical lead time (*C*_{crit}/*L*) is 0.1 (i.e., that the cost of preparing is 10% of the mitigable portion of the loss). The actual ratio depends on the decision maker’s context, and each independent preparation action by a single decision maker has its own cost:loss ratio. The 10% value was selected because Considine et al. (2004) estimated the cost:loss ratio for the oil rig evacuation decision at approximately 9%, and 0.1 is approximately the ratio implied by the average length of NHC coastline warnings together with the 24-h cross-track forecast errors. Wilks (1991) also used 10% as the minimum cost:loss ratio. To illustrate how the value of dynamic optimization depends on the cost profile, we vary *τ*_{crit} (Fig. 3).
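Concretely, the two profiles can be written as follows. This is an illustrative sketch, not the paper’s exact parameterization: only the endpoints *C*(*τ*_{crit}) = *C*_{crit} and *C*(0) = *L* are fixed by the text, and the geometric form chosen for the exponential profile is an assumption.

```python
# Illustrative cost profiles C(tau): both equal C_crit at tau = tau_crit
# and rise to L as the minimum remaining lead time tau shrinks to zero.
# For tau > tau_crit, the cost is held constant at C_crit, matching the
# static framework's assumption.

def linear_cost(tau, tau_crit=24.0, c_crit=0.1, L=1.0):
    """Linear increase from C_crit at tau = tau_crit to L at tau = 0."""
    tau = min(tau, tau_crit)              # constant C_crit before tau_crit
    return L + (c_crit - L) * (tau / tau_crit)

def exponential_cost(tau, tau_crit=24.0, c_crit=0.1, L=1.0):
    """Geometric increase: C(tau) = L * (C_crit / L) ** (tau / tau_crit)."""
    tau = min(tau, tau_crit)
    return L * (c_crit / L) ** (tau / tau_crit)
```

Both functions satisfy the stated boundary conditions; the exponential profile rises much more steeply as *τ* approaches zero, which is what makes late preparations expensive.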

The decision maker’s objective is to minimize the expected total cost of any hurricane. The expected total cost is a function of the hurricane path (i.e., whether it strikes the target) and of the decisions (*a _{j}*). The total cost for a given hurricane is equal to the preparation cost if preparation is ordered, the mitigative loss if there is no preparation and the hurricane strikes the target, or zero if there is no preparation and no strike.

### c. The forecast

The information available to the decision maker at each decision point *t* is *p*_{j}, which is simply a strike probability conditional on the information available to that time, as contained in the state *s*_{t} = *j*. For the dynamic policy, the decision maker also has more information about the future evolution of the hurricane, conditional on each state of future information. This information is contained in the Markov model. Track forecasts are not parameters in the decision rules presented here, although in practice they determine the strike-probability forecasts (Crutcher et al. 1982; Gross et al. 2004).

## 4. Dynamic decision making with the Markov model

We combine the Markov model with the dynamic decision model to show how the static and dynamic frameworks can be used to generate decisions, and compare the performance of policies under the two frameworks.

### a. The policies

A policy, denoted as *π*, is a complete description of the action that a decision maker will take in any possible scenario. Each scenario corresponds to a state in the Markov hurricane model. Therefore, a decision maker following policy *π* will take action *a*_{j} = *π*(*j*) ∈ {0, 1} from each state *j*.

We first define the static policy *π*_{S}. The strike probability *p*_{j}, defined in section 2, is a function of the state *j* and a given stationary target location. In the static framework, the decision rule for each state *j* is to prepare if the cost of preparing *C*(*τ*_{j}) is less than the expected loss. The static decision rule, which defines the policy *π*_{S}, is therefore

*π*_{S}(*j*) = 1 if *C*(*τ*_{j}) ≤ *p*_{j}*L*, and *π*_{S}(*j*) = 0 otherwise.

After *τ*_{crit} has passed, the static decision rule is reapplied at each decision point. If the preparation has not already been undertaken, it can still be accomplished at a cost determined by the minimum remaining lead time according to the function *C*(*τ*_{j}). The policy is called static because the decision rule does not account for future updated forecasts and opportunities to prepare.

There are two reasons that late preparations may occur under the static policy. First, hurricanes may form with a lead time already less than *τ*_{crit}. Second, a hurricane may have a low strike probability at *τ*_{crit} and later transition into a state whose strike probability triggers a preparation under the static policy. In each case, the preparation is allowed at a cost *C*(*τ* < *τ*_{crit}).
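The static rule is a single cost:loss comparison, reapplied at each decision point. A minimal sketch, assuming the exponential cost profile with *C*_{crit} = 0.1 and *τ*_{crit} = 24 h; the probabilities passed in below are hypothetical:

```python
def static_action(p_strike, tau, tau_crit=24.0, c_crit=0.1, L=1.0):
    """Static rule: prepare (1) iff C(tau) <= p_strike * L.

    The exponential cost profile is an assumed form; the rule itself is
    the one-shot cost:loss comparison, reapplied at each decision point.
    """
    tau = min(tau, tau_crit)                      # constant cost before tau_crit
    cost = L * (c_crit / L) ** (tau / tau_crit)   # C(tau)
    return 1 if cost <= p_strike * L else 0

# A storm whose strike probability rises only after tau_crit triggers a
# late (more expensive) preparation under the static rule:
print(static_action(p_strike=0.05, tau=24.0))  # delay: C = 0.1 > 0.05
print(static_action(p_strike=0.60, tau=12.0))  # prepare late: C ~ 0.316 <= 0.60
```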

The performance of the dynamic policy, relative to the static policy, reflects the value of anticipating more accurate future forecasts, over and above the value of monitoring updated forecasts and taking appropriate action. The dynamic policy *π*_{D} takes advantage of the stochastic hurricane model by planning for the opportunity to take action later, and quantifying the value of future scenarios that can arise conditional on each possible updated forecast. Each state in the Markov model is associated with a value, denoted *V*_{j}, which depends on the values of all other states and, through those values, on the actions taken in other states. Therefore, *V*_{j} is a function of the policy *π*_{D}. Specifically,

- ∀ *j* ∈ *κ*, *V*_{j} = 1, which reflects that if the hurricane reaches this state without a prior preparation, the mitigable loss is incurred;
- ∀ *j* ∉ *κ* such that *a*_{j} = *π*_{D}(*j*) = 1, *V*_{j} = *C*(*τ*_{j}), which reflects the cost of preparation; and
- ∀ *j* ∉ *κ* such that *a*_{j} = *π*_{D}(*j*) = 0, *V*_{j} = Σ_{k∈Ω} *q*_{jk}*V*_{k}.

*V*_{j} reflects the expected total cost to the decision maker of a hurricane in that state, including both the possibility of a strike on an unprepared target and the costs of possible preparations.

Under the optimal dynamic policy, preparation is chosen from state *j* whenever the immediate cost *C*(*τ*_{j}) is less than the expected total cost associated with the state of the hurricane at the next time step; that is, whenever *C*(*τ*_{j}) ≤ Σ_{k∈Ω} *q*_{jk}*V*_{k}. The values cannot be computed by simple backward induction, because the states are not indexed by a time *t*, measured either from the hurricane’s formation or from its terminal state. Therefore, a computationally less demanding policy iteration method was used, as follows:

1) Start: let *a*_{j} = *π*_{S}(*j*) ∀ *j*.

2) Solve the system of simultaneous equations *V*_{j} = Σ_{k∈Ω} *q*_{jk}*V*_{k} for all *j* ∉ *κ* with *a*_{j} = 0, subject to the boundary conditions *V*_{j} = 1 ∀ *j* ∈ *κ* and *V*_{j} = *C*(*τ*_{j}) ∀ *j* ∉ *κ* with *a*_{j} = 1.

3) For all *j* ∉ *κ*, let *a*_{j} = 1 if *C*(*τ*_{j}) ≤ Σ_{k∈Ω} *q*_{jk}*V*_{k}, and *a*_{j} = 0 otherwise.

4) Check whether *a*_{j} has changed for any *j* ∈ Ω in this iteration. If not, the optimal dynamic policy is *π*_{D}(*j*) = *a*_{j} ∀ *j* ∈ Ω; otherwise, repeat steps 2–4.
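The four steps can be sketched on a toy state space. Everything numerical below is hypothetical (two transient states, one strike state in *κ*, and an absorbing miss state), and step 2’s simultaneous equations are solved by fixed-point iteration rather than a direct linear solve, which suffices here because the delay-state transition matrix is substochastic:

```python
# Policy iteration on a toy Markov hurricane model (all numbers hypothetical).
# States: 'far' (tau = 48 h), 'near' (tau = 24 h), 'strike' (in kappa), 'miss'.
Q = {  # transition probabilities q_jk out of the transient states
    "far":  {"near": 0.25, "miss": 0.75},
    "near": {"strike": 0.60, "miss": 0.40},
}
tau = {"far": 48.0, "near": 24.0}
L, c_crit, tau_crit = 1.0, 0.1, 48.0
C = {j: L * (c_crit / L) ** (t / tau_crit) for j, t in tau.items()}  # C(tau_j)

def continuation(j, V):
    """Expected cost of delaying in state j: sum_k q_jk * V_k."""
    return sum(q * V[k] for k, q in Q[j].items())

def solve_values(a):
    """Step 2: V_j = 1 on kappa, C(tau_j) where a_j = 1, else sum_k q_jk V_k."""
    V = {"strike": 1.0, "miss": 0.0}                 # boundary conditions
    V.update({j: C[j] if a[j] else 0.0 for j in Q})
    for _ in range(200):        # fixed-point iteration instead of direct solve
        for j in Q:
            if not a[j]:
                V[j] = continuation(j, V)
    return V

# Step 1: initialize with the static policy (prepare iff C(tau_j) <= p_j * L).
p = {"far": 0.25 * 0.60, "near": 0.60}   # eventual strike probabilities p_j
a = {j: int(C[j] <= p[j] * L) for j in Q}
while True:
    V = solve_values(a)                                       # step 2
    a_new = {j: int(C[j] <= continuation(j, V)) for j in Q}   # step 3
    if a_new == a:                                            # step 4: converged
        break
    a = a_new

print(a)                      # the dynamic policy delays in 'far', prepares in 'near'
print(round(V["far"], 4))     # expected total cost from 'far' under pi_D
```

In this toy example the static policy prepares immediately in the far state (*C* = 0.1 ≤ *p·L* = 0.15), but the dynamic policy recognizes that the expected cost of delaying (0.25 × *C*(24 h) ≈ 0.079) is lower, illustrating the value of anticipating the updated forecast.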

### b. Expected total cost

The expected total costs (Figs. 4a–d) of the static and dynamic policies are computed for targets at Norfolk and Galveston using both the linear and exponential cost functions. The value of *τ*_{crit} is also varied from 6 to 120 h to represent varying degrees of flexibility. The value of a forecast is equal to the difference between the expected total cost using the forecast and the no-skill expected total cost. The value decreases with *τ*_{crit} because even with perfect information at the time a hurricane forms, the cost of preparation is higher for a cost function based on a long critical lead time.

The white area in Fig. 4 represents the reduction in expected total cost due to the dynamic framing. This savings is also expressed as a percentage improvement relative to the static policy’s performance, and is plotted as a solid line. As an additional reference, the expected total costs under perfect information and under a no-skill rule, which is defined as limited to climatological information, are also shown. Under perfect information, the decision maker knows whether the hurricane will strike the target as soon as the hurricane forms [i.e., the expected total cost under perfect information is Σ_{j∈Ω} *r*_{j}*p*_{j}*C*(*τ*_{j})]. The no-skill rule compares the state-specific cost of preparation with the historical probability of a hurricane striking the target, not conditional upon the state. The expected total cost under no skill is Σ_{j∈Ω} *r*_{j} min[*C*(*τ*_{j}), *p̄L*], where *p̄* is simply the mean of *p*_{j} over all *j*.

The savings resulting from the dynamic optimization vary from 0% to 6% for Norfolk and 0% to 8% for Galveston, depending on the shape of the cost function and the value of *τ*_{crit}. In both locations, the largest percentage improvement can be extracted by decision makers whose *τ*_{crit} is in the range of 24–48 h. For Norfolk, there is a secondary increase in percentage improvement between 72 and 96 h, which contributes to a jagged appearance of the percentage improvement curve. This secondary maximum arises because, at this time interval from the target cell of Norfolk, many tropical cyclones that have tracked westward across the tropical North Atlantic will either begin a turn toward the north (i.e., recurve) and toward Norfolk, or continue straight westward and away from Norfolk. Delaying preparation in this interval therefore yields a substantial improvement in information regarding landfall. This secondary maximum does not appear in the curves for Galveston (Figs. 4c,d), as there is generally no such bifurcation point between recurving and straight tracks that affects landfall at Galveston. The jagged appearance is also caused by the discretization of the model, as discussed further below.
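The two reference levels can be computed directly from the model parameters. A sketch with hypothetical values of the formation distribution *r*_{j}, strike probabilities *p*_{j}, and formation-time preparation costs *C*(*τ*_{j}); *p̄* is taken here as the *r*-weighted mean of *p*_{j}, one plausible reading of "the mean of *p*_{j} over all *j*":

```python
# Reference expected total costs (all numbers hypothetical).
# r_j: probability a hurricane forms in state j; p_j: strike probability
# from state j; cost[j]: preparation cost C(tau_j) at formation.
states = ["A", "B", "C"]
r = {"A": 0.5, "B": 0.3, "C": 0.2}        # formation distribution r_j
p = {"A": 0.05, "B": 0.20, "C": 0.60}     # strike probabilities p_j
cost = {"A": 0.10, "B": 0.25, "C": 0.50}  # C(tau_j) at formation
L = 1.0

# Perfect information: prepare (at formation-time cost) only for storms
# that will actually strike: sum_j r_j * p_j * C(tau_j).
perfect = sum(r[j] * p[j] * cost[j] for j in states)

# No skill: compare C(tau_j) with the unconditional strike risk p_bar * L.
p_bar = sum(r[j] * p[j] for j in states)   # climatological strike rate
no_skill = sum(r[j] * min(cost[j], p_bar * L) for j in states)

print(perfect, no_skill)   # perfect information is a lower bound on cost
```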

The high-value periods reflect a stage of hurricane evolution during which information about the relevant event—landfall at the target—is improving quickly. For decision makers with *τ*_{crit} ≤ 24 h, forecasts are already quite accurate at *τ*_{crit} and there are few remaining opportunities to reevaluate a preparation decision. Decision makers with *τ*_{crit} ≥ 72 h do not have very much flexibility to respond to improving forecasts, because their costs of preparation become prohibitive before the time forecast accuracy is high. By contrast, decision makers with *τ*_{crit} in the 24–60-h range can wait an extra 6 or 12 h and gain a large benefit in terms of improved accuracy. They can gain substantially from planning for their future opportunities to prepare after the next forecast update. For some decision makers (at Norfolk, with *τ*_{crit} = 96, 102, and 108 h) the value of framing the decision dynamically exceeds the value of reducing *τ*_{crit} by 6 h. The additional analysis required to anticipate updated forecasts is likely to be less expensive than investments to reduce preparation lead time.

These results are dependent on the model specification, which is relatively simple, though well calibrated with the NHC strike-probability ellipses. The magnitude of the results also depends on the estimation of the model parameters, and in particular on the transition probabilities *q _{jk}*, which were estimated from a finite historical database. As the model is formulated, however, the additional value derived from dynamic optimization is necessarily nonnegative, and depending on the cost function, the lead time, and the location of the target, could exceed the maximum savings achieved in our numerical examples. The difference between dynamic and static optimization is of similar magnitude to the differences found by Wilks (1991) with the same minimum cost:loss ratio, though he modeled a different decision process and a different meteorological event.

Under our cost assumptions, no value is gained in expectation from using later forecasts for decision makers at Galveston with *τ*_{crit} ≥ 72 h. These decision makers are better off waiting, unprepared, for the hurricane because only about 4.5% of hurricanes in the historical database strike at Galveston. Given that the minimum cost:loss ratio assumed for the preparation action in this example (10%) is higher than the overall strike rate, the quality of forecasts at long lead times is too low to make responding to them cost effective. In the model, for a decision maker with a critical lead time of 72 h, the cost of preparation is 70% as large as a loss by the time lead time has declined to 24 h. Even with perfect information, very little flexibility exists for delayed preparations. This result does not imply that in reality decision makers at Galveston do not benefit from forecasts, because generally some preparation is available even with very short lead times that can mitigate losses somewhat. The cost function should be interpreted as corresponding to a single preparation type. For example, it might be true that the only way to sortie ships when the lead time is 24 h is to hire tugboats at emergency rates to take ships upriver, reflected in a high cost *C*(*τ* = 24), but there may be other preparatory actions available at 24-h lead times that can reduce loss substantially (i.e., for many decision makers there are at least some actions whose cost function is pushed to the right).

In some cases (although not in the results shown here), the static decision rule can produce counterintuitive results in which the static policy performs worse than the no-skill policy. The reason is that repeatedly reevaluating a decision to take an irreversible action in a static framework can be worse than making a decision and sticking with it regardless of future information. As a hurricane evolves, its strike probability can both increase and decrease, and the changes will not necessarily be monotonic. If the trigger for irreversible and costly preparation is set at *p*_{j} = *C*(*τ*_{j})/*L*, then the preparation is likely to be undertaken at a moment when the hurricane’s strike probability is higher than it is through most of its track. A trigger set at the point that is optimal for a one-time decision, when reapplied at every decision point, will therefore lead the decision maker to prepare too often.

The estimates of the expected total cost of each policy and of the percent improvement of the dynamic approach are functions of the storm model. Like any model, its formulation is an inexact representation of the real system, and the parameter estimates depend on the dataset used to fit the model. To explore the impact of sampling variability on our results, we run a bootstrapping process. For each of 100 iterations, we draw a sample from among the 538 storms in the dataset. The sample size is 200, which balances preserving sampling variability (by keeping the sample small relative to the entire database) against keeping the sample large enough that the dataset is not too sparse to generate useful transition probabilities.

At each iteration, the parameters *q*_{jk}, *p*_{j}, and *r*_{j} are calculated as described in section 2, based on the 200 storms in the sample. Once the parameters are fitted, we reproduce the thick line in Fig. 4a. The static and dynamic policies are applied for each *τ*_{crit} from 6 to 120 h, using *C*_{crit} = 0.1, normalized to *L* = 1 as in sections 3 and 4a, using an exponential cost function, for a target at Norfolk. Then the percentage savings, or reduction in expected total cost of the dynamic policy relative to the static policy, is calculated.
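The resampling loop itself is simple. In the sketch below, `fit_and_savings` is a hypothetical placeholder for the full refit-and-evaluate step (reestimating *q*_{jk}, *p*_{j}, and *r*_{j} from the sample and rerunning both policies); a toy surrogate stands in for it so the sketch runs:

```python
import random

# Bootstrap sketch (schematic): resample 200 of the 538 storms with
# replacement, refit the model, and recompute the dynamic-vs-static
# savings for each tau_crit.
random.seed(0)
storms = list(range(538))                 # stand-ins for the storm tracks

def fit_and_savings(sample, tau_crit):
    """Hypothetical placeholder: refit q_jk, p_j, r_j from `sample` and
    return the percentage savings of the dynamic over the static policy.
    A toy surrogate is used here so the sketch is runnable."""
    return 100.0 * len(set(sample)) / len(storms) * (tau_crit / 120.0)

tau_crits = range(6, 121, 6)              # 6 to 120 h in 6-h steps
results = {tc: [] for tc in tau_crits}
for _ in range(100):                      # 100 bootstrap iterations
    sample = random.choices(storms, k=200)
    for tc in tau_crits:
        results[tc].append(fit_and_savings(sample, tc))

def percentile(xs, q):
    """Crude empirical percentile for the summary curves."""
    xs = sorted(xs)
    return xs[min(len(xs) - 1, int(q * len(xs)))]

for tc in (24, 120):                      # summarize as in Fig. 5a
    vals = results[tc]
    print(tc, sum(vals) / len(vals), percentile(vals, 0.05), percentile(vals, 0.95))
```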

The mean, median, and 5th and 95th percentiles of the savings are shown in Fig. 5a as a function of *τ*_{crit}. The overall shape of the curve is very similar to the curve calculated using all the storm tracks in the parameter calculations, shown as the thick line in Fig. 4a. The greatest savings are 7.2% for the mean in Fig. 5a and 6.2% for the results in Fig. 4a. In Fig. 5a, the largest savings are for decision makers with *τ*_{crit} in the range of 36–48 h, whereas in Fig. 4a, the peak savings are for decision makers with *τ*_{crit} = 30 h. A difference between the two figures is that the dynamic savings drop to nearly zero (0.02%) at *τ*_{crit} = 120 h in Fig. 4a, whereas in Fig. 5a, although the savings drop off, they are still 2%–3% for *τ*_{crit} = 120 h.

A second noticeable difference between the two curves is that in Fig. 4a, the dynamic savings curve is quite jagged. In particular, there is a major dropoff in savings for *τ*_{crit} = 72 h. The jaggedness is a simple artifact of the discretization of the model: each *τ*_{crit} changes the cost curve, and changes the preparation cost and optimal action in many states under each policy. The jaggedness is averaged away in Fig. 5a, as the mean, median, and percentiles are all taken for each value of *τ*_{crit}. Figure 5b shows the percentage improvement for each *τ*_{crit} for 10 of the samples, which retain the jaggedness of the thick line in Fig. 4a.

### c. Simulation

The expected total cost results reflect a balancing of two effects. On the positive side, the dynamic policy prevents false alarms when preparation is delayed and updated forecasts show that preparation is not necessary. The negative effect of the dynamic policy arises when updated forecasts show that a strike is more likely than it appeared earlier. This leads to either a delayed, and therefore more costly, preparation, or a greater risk of a strike at an unprepared target.

To examine the contributions of these positive and negative outcomes to the expected total cost, the static and dynamic policies are also evaluated on the basis of a simulation. Ten thousand hurricane tracks were generated by Monte Carlo simulation using the historical distribution of the location of hurricane formations and the Markov transition probabilities. Of the simulated hurricanes, 8.3% strike at Norfolk, and 4.5% at Galveston, as compared with 10.0% and 4.6% of historical hurricanes, respectively. The expected total cost for the static and dynamic policies for Norfolk using an exponential cost function (Fig. 6) is broken down by the type of cost: necessary preparations (for hurricanes that eventually strike the target), false alarms (preparations for hurricanes that do not strike), and unprepared strikes.

The frequency of outcomes for each policy (Fig. 7) is examined for the simulated hurricanes. The number of false alarms is about 1500 (about 15% of all hurricanes) under the dynamic policy and about 2000 (about 20% of hurricanes) under the static policy; that is, the dynamic framing averts about a quarter of all false alarms. The number of false alarms drops off for short critical lead times (less than 24 h) because the relevant forecasts are more accurate, and for long critical lead times because more often no preparation is optimal under either policy.

This savings is partly offset by a slightly greater number of unprepared strikes and delayed, and therefore more expensive, preparations under the dynamic policy. Although the expected total cost is lower under the dynamic policy, unprepared strikes make up a larger portion of the expected total cost, as the savings in reduced false alarms are partially offset by an increase in delayed preparations and unprepared strikes. Unprepared strikes make a larger contribution to the expected total cost (Fig. 6) than their numbers (Fig. 7) indicate because each unprepared strike costs more than a preparation. The number of storms that strike unprepared targets (the dashed lines in Fig. 7) is slightly higher under the dynamic policy (a maximum of 1.23% higher, and typically ≪1% higher, expressed as a percentage of storms). In addition, the dotted line shows the number of delayed preparations. For these hurricanes, both policies call for a preparation, but the dynamic policy delays it, usually incurring a higher cost (sometimes the cost is not higher, because the minimum possible lead time does not increase).
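The outcome bookkeeping behind Figs. 6 and 7 amounts to classifying each simulated track by whether preparation was ordered and whether the storm struck. A schematic sketch on a hypothetical two-state model (the transition probabilities and the policy are illustrative, not the paper’s fitted values):

```python
import random

# Classify simulated tracks into necessary preparations, false alarms,
# unprepared strikes, and no-action outcomes (toy model, hypothetical numbers).
random.seed(1)

def simulate_track():
    """One track: from 'far', go to 'near' (0.25), 'strike' (0.05), or 'miss';
    from 'near', go to 'strike' (0.60) or 'miss' (0.40)."""
    u = random.random()
    if u < 0.25:
        return ["far", "near", "strike" if random.random() < 0.60 else "miss"]
    return ["far", "strike" if u < 0.30 else "miss"]

policy = {"far": 0, "near": 1}   # hypothetical: delay far out, prepare near

counts = {"necessary": 0, "false_alarm": 0, "unprepared_strike": 0, "no_action": 0}
for _ in range(10_000):
    path = simulate_track()
    prepared = any(policy.get(s, 0) for s in path)   # preparation ordered?
    struck = path[-1] == "strike"                    # did the storm strike?
    if prepared:
        counts["necessary" if struck else "false_alarm"] += 1
    else:
        counts["unprepared_strike" if struck else "no_action"] += 1

print(counts)  # each category contributes its own cost to the total
```

Direct strikes from the far state land on an unprepared target under this policy, which is the negative outcome that partially offsets the averted false alarms.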

### d. Discussion

Although it is quite simple, the static decision rule is not an unrealistic straw man. First, it is consistent with prescriptive decision modeling in the literature. Considine et al. (2004) use a static decision rule in their prescriptive model of oil rig evacuation and shutdown decisions based on hurricane forecasts. To the extent that their decision making is quantitatively based on forecasts, decision makers are likely to be following a rule similar to the static rule, and in fact this is what is suggested by Jarrell and Brand (1983). The most detailed information officially available is the NHC strike-probability forecast, which would support a static decision rule, but it would not support a dynamic decision process. Some decision makers may intuitively adjust for the fact that they anticipate a significant reduction in uncertainty, but they would have to be very familiar with tropical cyclones to be able to do this effectively.

## 5. Real-time decision making

The previous section indicated that there is value in planning for future forecasts using a dynamic policy, over and above the value of monitoring and responding to updated forecasts using the static decision rule. However, dynamic optimization could not be widely implemented in real time, partly because the decision model is specific to the decision maker’s cost profile: it is not as general as a strike-probability forecast, which applies to every decision maker with assets at a given location. A second reason this process would be difficult to implement in practice is that the stochastic model required for dynamic optimization is not designed for forecasting, and would not have nearly the accuracy provided by current numerical weather prediction models. Ideally, it would be possible to create a highly detailed atmospheric model that was fully stochastic, and therefore supported dynamic optimization, while at the same time giving highly accurate predictions at long lead times. In practice, however, a single model must trade off atmospheric detail against stochastic information.

How then can the value of planning for future forecasts be extracted in real-time hurricane preparation decisions? One approach is to develop an information forecast that quantifies how information about a relevant weather event can be expected to improve in the future or, equivalently, how uncertainty will be resolved. The information forecast could take the form of a time profile of information quality or of a measure of uncertainty over future forecast updates.

A real-time information forecast could be used in combination with track and strike-probability forecasts and a decision maker’s specific alternatives and cost profile to develop a decision rule that approximates dynamic decision making. An individual decision maker’s choices would be a function of the following:

The available alternatives, at each lead time, and their costs;

the remaining lead time for a given hurricane;

the strike probability and, ideally, the probability of each intensity or wind speed at the target location; and

the anticipated information improvement that is represented in the information forecast.

In general, the necessity of costly preparation increases as lead time declines, and increases with the strike probability for the target location. The desirability of immediate preparation also decreases when information quality is expected to improve in the near future. For example, it might be optimal for a given decision maker to prepare at a 72-h lead time and a 50% strike probability when little information gain is anticipated, but optimal to delay in the same situation if the information forecast indicates a large anticipated gain. Depending on the form of the forecast, a decision maker would use it together with the other relevant factors to balance the value of waiting for improving forecasts against the value of undertaking preparation at a long lead time.

As in the design of any forecast, the goal is to take a large amount of information and distill it into a form that is accessible and understandable to users and simultaneously make it as valuable to them as possible. These information forecasts would represent a reduction of the information from a complete stochastic model, but they would be more informative than a scalar instantaneous strike probability.

In designing an information forecast, several trade-offs must be considered. One trade-off is its degree of reliance on historical information, such as track errors, versus measurable characteristics of individual hurricanes including speed, intensity, location, and even consistency—all of which are related to forecast error. A hurricane-specific measure could even quantify some of the information available in consensus forecasting (Goerss 2000) and in the systematic approach introduced by Carr et al. (2001). For example, the measure could depend on the level of certainty as reflected in the agreement among tracks resulting from multiple atmospheric models.

A second dimension in the design of an information forecast is its level of specificity to a decision maker’s context. At one extreme, an information forecast could be as general as an accuracy profile applicable to all hurricanes, which is no more informative than plotting the average track error as a function of lead time. For a given decision maker’s cost profile, this could be used to achieve a rough understanding of the trade-off between lead time and accuracy. An information forecast that was designed to take different values for different target locations would have the potential to be more valuable in approximating dynamic decisions. It is natural to think about information quality as dependent on the geographic location of interest. For example, accurate information about landfall at Caribbean locations is available earlier than accurate information about landfall along the Gulf coast.

An information forecast could even be designed as a function of a specific decision’s objective. For example, it could reflect the probability of the best decision in a given context changing in the next 6 or 12 h. An information forecast specific to a given decision maker could include economic information by quantifying the value of waiting. There is a trade-off between the portability of a low-specificity forecast and the potential value that can be extracted by each decision maker. Users with a good understanding of their cost profile and a lot of flexibility in the period during which forecast information improves rapidly would benefit from tailored information forecasts and from tailored decision rules.

The NHC method for generating strike-probability ellipses is an example of a compromise in these dimensions. The hurricane track is specific to the hurricane, but the probability distribution around each track point is based on purely historical parameters (Crutcher et al. 1982; Sheets 1985). It is not specific to the decision maker’s cost profile, but it is specific to each target location.

## 6. Conclusions

The current paper models decision making with respect to a sequence of up to 20 interrelated forecasts. We have developed and integrated a climatology-based Markov storm model with a dynamic decision model, and estimated the value of dynamic decision making. This framework allows for the explicit anticipation of improving, updated forecasts.

The results indicate that a decision maker who has the flexibility to wait for updated hurricane forecasts can extract a substantial value from adopting a dynamic approach. For some decision makers, the value of framing the decision dynamically exceeds the value of reducing *τ*_{crit} by 6 h. Improving the decision process to capture this value is likely to be less expensive than investments to reduce preparation lead time.

The frequency and predictability of storms in the western North Pacific suggest that a dynamic approach to anticipating improving forecast accuracy would be even more valuable for typhoons. The value of the dynamic framework depends on the decision maker’s location, preparation alternatives, and cost profile. We estimate the added value for a multiperiod cost:loss framework, with a single preparation action and binary weather outcomes. The approach can be expanded to include multiple weather conditions, such as varying wind speeds, as well as staged preparation actions.

The insights gained in this work could be utilized in an operational setting by elaborating upon the alternatives and cost profiles of individual decision makers, such as fleet commanders who must decide whether to sortie ships, and expanding the state space of the Markov model appropriately, for example by including intensity.

Another way to adapt this approach to real-time decision making is to develop forecasts of improving information quality that could be used in combination with strike-probability forecasts to evaluate the trade-off between lead time and forecast accuracy, estimate the value of waiting for improving forecasts, and thereby reduce false alarms. An information forecast would complement the increasingly accurate track forecasts and the new NHC strike-probability product that will include multiple weather conditions (Gross et al. 2004).

## Acknowledgments

This research has been sponsored in part by the Office of Naval Research, Marine Meteorology Program. The authors acknowledge valuable comments from Prof. R. Elsberry, Prof. C. Wash, Prof. K. Wall, and the anonymous reviewers.

## REFERENCES

Adams, C. R., and D. J. Berri, cited 2004: The economic cost of hurricane evacuations, 1999. First U.S. Weather Research Program Science Symposium. [Available online at http://box/mmm.ucar.edu/uswrp/abstracts/Adams_Christopher.html.]

Adams, R. M., K. J. Bryant, B. A. McCarl, D. M. Legler, J. J. O'Brien, A. Solow, and R. Weiher, 1995: Value of improved long-range weather information. *Contemp. Econ. Policy*, **13**, 10–19.

Baker, E. J., 2002: Societal impacts of tropical cyclone forecasts and warnings. *WMO Bull.*, **51**, 229–235.

Brown, B. G., R. W. Katz, and A. H. Murphy, 1986: On the economic value of seasonal precipitation forecasts: The fallowing/planting problem. *Bull. Amer. Meteor. Soc.*, **67**, 833–841.

Carr, L. E., III, R. L. Elsberry, and J. E. Peak, 2001: Beta test of the systematic approach expert system prototype as a tropical cyclone track forecasting aid. *Wea. Forecasting*, **16**, 355–368.

Considine, T. J., C. Jablonowski, B. Posner, and C. H. Bishop, 2004: The value of hurricane forecasts to oil and gas producers in the Gulf of Mexico. *J. Appl. Meteor.*, **43**, 1270–1281.

Crutcher, H. L., C. J. Neumann, and J. M. Pelissier, 1982: Tropical cyclone forecast errors and the multimodal bivariate normal distribution. *J. Appl. Meteor.*, **21**, 978–987.

Dow, K., and S. Cutter, 1998: Crying wolf: Repeat responses to hurricane evacuation orders. *Coastal Manage.*, **26**, 237–252.

Epstein, E. S., and A. H. Murphy, 1988: Use and value of multiple-period forecasts in a dynamic model of the cost–loss ratio situation. *Mon. Wea. Rev.*, **116**, 746–761.

Goerss, J. S., 2000: Tropical cyclone track forecasts using an ensemble of dynamical models. *Mon. Wea. Rev.*, **128**, 1187–1193.

Gross, J. M., M. DeMaria, J. A. Knaff, and C. R. Sampson, 2004: A new method for determining tropical cyclone wind forecast probabilities. Preprints, *26th Conf. on Hurricanes and Tropical Meteorology*, Miami, FL, Amer. Meteor. Soc., 425–426.

Jarrell, J., and S. Brand, 1983: Tropical cyclone strike and wind probability applications. *Bull. Amer. Meteor. Soc.*, **64**, 1050–1056.

Jarrell, J., and M. DeMaria, 1999: An examination of strategies to reduce the size of hurricane warning areas. Preprints, *23d Conf. on Hurricanes and Tropical Meteorology*, Dallas, TX, Amer. Meteor. Soc., 50–52.

Jarrell, J., M. Mayfield, E. N. Rappaport, and C. W. Landsea, 2001: The deadliest, costliest, and most intense United States hurricanes from 1900 to 2000. NOAA Tech. Memo. NWS TPC-1. [Available online at http://www.aoml.noaa.gov/hrd/Landsea/deadly/index.html.]

Jarvinen, B. R., C. J. Neumann, and M. A. S. Davis, 1984: A tropical cyclone data tape for the North Atlantic: Contents, limitations, and uses. NOAA Tech. Memo. NWS NHC 22, 21 pp.

Katz, R. W., 1993: Dynamic cost–loss ratio decision-making model with an autocorrelated climate variable. *J. Climate*, **6**, 151–160.

Katz, R. W., and A. H. Murphy, 1990: Quality/value relationships for imperfect weather forecasts in a prototype multistage decision-making model. *J. Forecasting*, **9**, 75–86.

Katz, R. W., and A. H. Murphy, 1997: *Economic Value of Weather and Climate Forecasts*. Cambridge University Press, 222 pp.

Katz, R. W., A. H. Murphy, and R. L. Winkler, 1982: Assessing the value of frost forecasts to orchardists: A dynamic decision-making approach. *J. Appl. Meteor.*, **21**, 518–531.

Katz, R. W., B. G. Brown, and A. H. Murphy, 1987: Decision-analytic assessment of the economic value of weather forecasts: The fallowing/planting problem. *J. Forecasting*, **6**, 77–89.

Leigh, R., 1995: Economic benefits of Terminal Aerodrome Forecasts (TAFs) for Sydney airport. *Aust. Meteor. Appl.*, **2**, 239–247.

McAdie, C. J., and M. B. Lawrence, 2000: Improvements in tropical cyclone track forecasting in the Atlantic basin, 1970–98. *Bull. Amer. Meteor. Soc.*, **81**, 989–997.

Mjelde, J. W., and B. L. Dixon, 1993: Valuing the lead time of periodic forecasts in dynamic production systems. *Agric. Syst.*, **42**, 41–55.

Murphy, A. H., and R. W. Katz, 1985: *Probability, Statistics, and Decision Making in the Atmospheric Sciences*. Westview Press, 545 pp.

Murphy, A. H., and Q. Ye, 1990: Optimal decision making and the value of information in a time-dependent version of the cost–loss ratio situation. *Mon. Wea. Rev.*, **118**, 939–949.

Powell, M. D., and S. D. Aberson, 2001: Accuracy of United States tropical cyclone landfall forecasts in the Atlantic basin (1976–2000). *Bull. Amer. Meteor. Soc.*, **82**, 2749–2767.

Puterman, M. L., 1994: *Markov Decision Processes: Discrete Stochastic Dynamic Programming*. John Wiley and Sons, 649 pp.

Roulston, M. S., and L. A. Smith, 2004: The boy who cried wolf revisited: The impact of false alarm intolerance on cost–loss scenarios. *Wea. Forecasting*, **19**, 391–397.

Sheets, R. C., 1985: The National Weather Service hurricane probability program. *Bull. Amer. Meteor. Soc.*, **66**, 4–13.

Stedge, G. D., 2000: Arsenic in drinking water rule economic analysis. EPA 815-R-00-026, Environmental Protection Agency, 257 pp.

Stewart, T. R., R. W. Katz, and A. H. Murphy, 1984: Value of weather information: A descriptive study of the fruit-frost problem. *Bull. Amer. Meteor. Soc.*, **65**, 126–137.

Viscusi, W. K., and J. E. Aldy, 2003: The value of a statistical life: A critical review of market estimates throughout the world. *J. Risk Uncertainty*, **27**, 5–76.

Whitehead, J. C., 2003: One million dollars per mile? The opportunity costs of hurricane evacuation. *Ocean Coastal Manage.*, **46**, 1069–1083.

Wilks, D. S., 1991: Representing serial correlation of meteorological events and forecasts in dynamic decision-analytic models. *Mon. Wea. Rev.*, **119**, 1640–1662.

Wilks, D. S., 1995: *Statistical Methods in the Atmospheric Sciences*. Academic Press, 467 pp.

Wilks, D. S., and T. M. Hamill, 1995: Potential economic value of ensemble-based surface weather forecasts. *Mon. Wea. Rev.*, **123**, 3555–3575.

(a) Strike probability for 72-h positions based on the Markov model of a hurricane in the 1° lat × 1° lon box centered at 25°N, 68°W. (b) As in (a) but showing the official NHC strike-probability forecast for Hurricane Isabel (information online at http://www.nhc.noaa.gov).

Citation: Weather and Forecasting 21, 5; 10.1175/WAF958.1

Sample cost profiles showing exponential and linear functions with 24- and 72-h critical lead times.

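The exponential and linear cost profiles with 24- and 72-h critical lead times can be sketched as follows. The exact functional forms, endpoints, and cost values here are illustrative assumptions, not the paper's calibration; the sketch only shows the qualitative shape: cheapest at or beyond the critical lead time, increasingly expensive as lead time shrinks.

```python
import math

def linear_profile(tau, tau_crit, c_min=1.0, c_max=3.0):
    """Illustrative linear cost profile: minimum cost at or beyond the
    critical lead time tau_crit, rising linearly to c_max as the lead
    time tau (hours) shrinks to zero."""
    if tau >= tau_crit:
        return c_min
    return c_max - (c_max - c_min) * tau / tau_crit

def exponential_profile(tau, tau_crit, c_min=1.0, c_max=3.0):
    """Illustrative exponential cost profile with the same endpoints:
    decays from c_max at tau = 0 to c_min at tau = tau_crit."""
    if tau >= tau_crit:
        return c_min
    rate = math.log(c_max / c_min) / tau_crit
    return c_max * math.exp(-rate * tau)

# 24- and 72-h critical lead times, as in the figure
profiles = {
    tau_crit: [(t, linear_profile(t, tau_crit), exponential_profile(t, tau_crit))
               for t in (0.0, 12.0, 48.0, 96.0)]
    for tau_crit in (24.0, 72.0)
}
```

Both curves agree at the endpoints; the exponential profile falls faster near zero lead time, modeling preparations that become disproportionately expensive when rushed.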

(a) Expected total cost of a hurricane under the Markov model, following each policy, for a target at Norfolk, VA, with an exponential cost profile. (b) As in (a) but using a linear cost profile. (c) Expected total cost of a hurricane under the Markov model, following each policy for a target at Galveston, TX, with an exponential cost profile. (d) As in (c) but using a linear cost profile.

(a) Percentage improvement of the dynamic model over the static model for 100 iterations of the Markov model in which the model parameters for each iteration are based on a subsample of 200 tropical cyclones from the dataset, chosen at random. Mean, median, and 5th and 95th percentiles of the 100 iterations are shown for each critical lead time. The results are for a target at Norfolk, VA, using an exponential cost profile, with critical lead times ranging from 6 to 120 h. (b) As in (a) but showing the percentage improvement in a subset of 10 of the 100 iterations used to construct (a).

Average total cost breakdown by hurricane based on a simulated set of hurricane tracks, for a target at Norfolk, VA, using an exponential cost profile.

Frequency of each outcome for 10 000 simulated hurricanes using both static and dynamic policies. Hurricanes that are not shown either had a necessary preparation at the same time for both policies, or did not strike the target. The target is at Norfolk, VA, with an exponential cost profile.

A comparison of maximum strike probabilities at 12-, 24-, 36-, 48-, and 72-h lead times for the NHC forecasts (middle column; http://www.nhc.noaa.gov/HAW2/english/forecast/probabilities_printer.shtml) and for the Markov model (right column).

^{1}

Source: “Navy Meteorologists Recommend how Ships Should Respond to Storms,” *Daytona Beach News-Journal*, 25 June 2001.

^{2}

Estimated cost of $6M, plus $30M in maintenance including preparing docked ships to depart. Source: “Navy Costs For Isabel at Least $105.6 Million,” *Norfolk Virginian-Pilot*, 27 September 2003.

^{3}

Because the weather outcome is modeled as binary, all specific hazards (wind, storm surge, precipitation, etc.) are encompassed in a “strike.”

^{4}

This suggests that there might be flexibility to prepare for a hurricane at less cost if the preparation were undertaken with *τ* > *τ*_{crit}. For example, if the sortie were ordered earlier, the U.S. Navy could prepare its ships at less cost, avoiding overtime pay and perhaps steaming at lower, more fuel-efficient speeds. We assume that the preparation cost at *τ*_{crit} is the minimum preparation cost.

^{5}

Cost of preparation could alternatively be modeled as a function of actual lead time, in which case its value would be uncertain at the time of the decision. That choice would reflect situations in which protection is only partially complete, or less effective, at shorter lead times, so that some portion of the mitigable loss is still incurred; this portion of the cost of the protective action would be a function of the time until the hurricane strikes.

^{6}

Because *τ*_{crit} reflects the lead time required to complete a preparation action before a storm strikes, it would include the lead time required to implement the action plus any additional buffer necessary; for example, hurricane force winds arrive about 10 h before the storm’s center (Powell and Aberson 2001).
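The accounting in this note can be made concrete with a short sketch. The 36-h action time is a hypothetical example; only the ~10-h wind-arrival buffer comes from the text (Powell and Aberson 2001).

```python
def critical_lead_time(action_hours, buffer_hours=10.0):
    """tau_crit per the note above: lead time required to implement the
    preparation action, plus a buffer before the storm center arrives
    (hurricane-force winds arrive roughly 10 h ahead of the center)."""
    return action_hours + buffer_hours

def must_decide_now(forecast_lead_time, action_hours, buffer_hours=10.0):
    """True when waiting any longer would leave too little time to
    complete the action before conditions deteriorate."""
    return forecast_lead_time <= critical_lead_time(action_hours, buffer_hours)

# Hypothetical ship sortie requiring 36 h to execute:
tau_crit = critical_lead_time(36.0)       # 46 h before the center arrives
can_wait = not must_decide_now(48.0, 36.0)  # at 48 h out, waiting is still possible
last_call = must_decide_now(42.0, 36.0)     # at 42 h out, it is decide-now-or-never
```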