Abstract

Standard indices used in the National Fire Danger Rating System (NFDRS) and Fosberg fire-weather indices are calculated from Weather Research and Forecasting (WRF) model simulations and observations in interior Alaska for June 2005. Evaluation shows that WRF is well suited for fire-weather prediction in a boreal forest environment at all forecast leads and on an ensemble average. Errors in meteorological quantities and fire indices marginally depend on forecast lead. WRF’s precipitation performance for interior Alaska is comparable to that of other mesoscale models applied to midlatitudes. WRF underestimates precipitation on average, but satisfactorily predicts precipitation ≥7.5 mm day−1, the threshold considered to reduce interior Alaska’s fire risk for several days. WRF slightly overestimates wind speed, but captures the temporal mean behavior accurately. WRF predicts the temporal evolution of daily temperature extremes, mean relative humidity, air and dewpoint temperature, and daily accumulated shortwave radiation well. Daily minimum (maximum) temperature and relative humidity are slightly overestimated (underestimated). Fire index trends are suitably predicted. Fire indices derived from daily mean predicted meteorological quantities are more reliable than those based on predicted daily extremes. Indirect evaluation by observed fires suggests that WRF-derived NFDRS indices reflect the variability of fire activity.

1. Introduction

Fire-weather forecasters provide local fire management authorities with information on the potential for wildfires so that they can plan prescribed burns, alert the public, and assign firefighters. For day-to-day decisions, fire-weather forecasters and local authorities rely on analyses of observations, routine numerical weather prediction (NWP), and satellite and radar data loops (Vidal et al. 1994; Hufford et al. 1998; Boles and Verbyla 2000; Carlson and Burgan 2003). Fire risk assessment requires combining the nonmeteorological (fuel availability, fuel type, terrain slope, etc.) and meteorological conditions that affect the initiation, spread, and difficulty of fire control into fire indices that reflect protection requirements. The National Fire Danger Rating System (NFDRS; Deeming et al. 1977; Cohen and Deeming 1985; Burgan 1988; National Wildfire Coordinating Group 2002) provides indices that rate the potential, over an area, for a fire to ignite, spread, and require suppression.

Humans or lightning can ignite fires. The likelihood of human-initiated fire can only be assessed from experience with a region of a given population density; lightning formation and frequency, however, depend on clouds with strong graupel formation and updrafts (Houze 1993; Berdeklis and List 2001; Fehr et al. 2003). Atmospheric instability and moisture availability are critical for developing convective clouds and severe precipitation (Crook 1996; Mölders and Kramm 2007). Convective clouds may cause lightning strikes and fire, while severe precipitation reduces fire risk. NWP can provide information on instability, moisture availability, convection, and precipitation.

Fire-weather forecasters face several challenges. Routine NWP is only available at coarser resolution than desirable for local fire-weather forecasts; model resolution, however, influences the reliability of predicted fire-weather conditions and fire risk assessments (Speer et al. 1996; Hoadley et al. 2004, 2006; Roads et al. 2005). In remote regions, meteorological data may only be available for nonrepresentative, but easily accessible, locations, or may be too sparse to reliably assess regional fire risks. Continuous data are required to assess the effects of fuel drying/wetting and to forecast fire danger for the next day. Thus, missing observations compromise the reliability of fire risk assessment.

Especially in regions of sparse data, deriving fire indices from special high-resolution NWP seems attractive for fire risk assessment. This case study assesses the feasibility of using the Weather Research and Forecasting (WRF) model (Michalakes et al. 2001, 2004; Wicker and Skamarock 2002; Klemp and Skamarock 2004; Skamarock et al. 2005; Klemp et al. 2007) to predict the June 2005 fire risk for interior Alaska, a region with sparse observations and a long history of fire disturbance (Lynch et al. 2003; Mölders and Kramm 2007). Fire indices are determined and evaluated based on thirty 5-day WRF simulations and observational data. Since errors in predicted meteorological quantities may propagate into calculated fire indices, skill scores are determined to identify error sources.

In June 2005, interior Alaska faced its second consecutive year with widespread wildfire activity, the third worst fire season in Alaska’s recorded history. Over 4.4 × 10⁶ acres burned across Alaska; 3.8 × 10⁶ of those acres were in interior Alaska.

2. Model description and initialization

a. Model setup

Cloud and precipitation formation processes are simulated using a five-water-class bulk microphysics parameterization (Thompson et al. 2004). The shortwave radiation scheme (Dudhia 1989) considers cloud optical depth, cloud albedo, clear-sky absorption, and scattering. The Rapid Radiative Transfer Model (Mlawer et al. 1997) accounts for multiple bands, trace gases, and microphysics species in determining longwave radiation. Atmospheric boundary layer processes are dealt with by the Yonsei University scheme, a nonlocal K scheme with an explicit entrainment layer and parabolic K profile in the unstable mixed layer; the surface-layer physics follow Monin–Obukhov similarity theory in conjunction with the Carlson–Boland viscous sublayer (Skamarock et al. 2005). A modified version of the Rapid Update Cycle land surface model (Smirnova et al. 1997, 2000) calculates heat and moisture exchange at the land–atmosphere interface and soil temperature and moisture states under consideration of frozen soil physics.

b. Simulations

The model domain (Fig. 1) covers interior Alaska with 144 × 88 grid points in the horizontal direction with a grid increment of 4 km and 31 vertical layers from the surface to 50 hPa. Hoadley et al. (2006) recommended this grid spacing for calculating fire indices. The time step is 24 s.

Fig. 1.

(a) Schematic view of the model domain location within AK. The model domain extends 576 km in the east–west and 352 km in the north–south directions. (b) Topography as assumed in the simulations. Gray shades have 150-m spacing with less than 150 m being the lightest and greater than 1350 m being the darkest colors. The lowest and highest elevations are 48 and 1492 m, respectively. Crosses indicate grid cells in which observational sites are available. (c) Land-cover types from light gray to black: shrub-land, deciduous forest, coniferous forest, mixed forest, water, and wooded tundra. Triangles indicate locations where fires ignited.


The simulations use the 1.0° × 1.0°, 6-h resolution National Centers for Environmental Prediction (NCEP) global final analyses (FNL) as the initial and boundary conditions. To obtain 24-, 48-, 72-, 96-, and 120-h forecasts for each day in June 2005, simulations start at 0600 UTC 28 May [2100 Alaska standard time (AST) 27 May]. From then onward, WRF is run every day for a 5-day simulation, with the last simulation starting on 30 June. This procedure yields a dataset encompassing “150 June days” for evaluation (30 days for each forecast lead). It permits examining the impact of forecast lead time on fire-weather forecasts as well as the value of ensemble means. Since all observations refer to AST, the analysis starts at 0900 UTC (0000 AST) 1 June.
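The bookkeeping behind this “150 June days” dataset can be illustrated with a short sketch. The code below is not part of the original study; it assumes, for simplicity, that each forecast lead corresponds to a whole day and interprets the ensemble discussed later as a time-lagged mean over the five leads valid on the same date.

```python
from datetime import date, timedelta

# Illustrative sketch (not the authors' code): map each June 2005 valid date to
# the initializations providing its 24-120-h forecasts, and average a quantity
# over those leads to obtain a per-date, time-lagged "ensemble" mean.
FIRST_INIT = date(2005, 5, 28)      # first 5-day run starts 28 May 2005
LEADS_H = (24, 48, 72, 96, 120)     # forecast leads evaluated for each day

def contributing_runs(valid_day):
    """Return (initialization date, lead in hours) pairs for one valid day."""
    return [(valid_day - timedelta(hours=lead), lead)
            for lead in LEADS_H
            if valid_day - timedelta(hours=lead) >= FIRST_INIT]

def lead_ensemble_mean(values_by_lead):
    """Average a forecast quantity over the available leads for one valid day.

    values_by_lead: dict mapping lead (h) -> forecast value (hypothetical input).
    """
    values = [values_by_lead[lead] for lead in LEADS_H if lead in values_by_lead]
    return sum(values) / len(values)

# Example: the 120-h forecast valid on 5 Jun 2005 stems from the run begun 31 May.
print(contributing_runs(date(2005, 6, 5)))
print(lead_ensemble_mean({24: 12.1, 48: 11.8, 72: 12.5, 96: 13.0, 120: 11.4}))
```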

c. Synoptic situation

In interior Alaska, the April–May 2005 snowpack water equivalent exceeded the long-term average appreciably (Knight et al. 2005a, b). Snowmelt was complete by 26 April in the valleys and by 29 April in the hills (Alaska Climate Center 2007); green-up followed on 4 May (T. Fathauer 2007, personal communication).

At the beginning of June, low pressure systems over the Aleutians and British Columbia governed the synoptic situation. High pressure over the Arctic Ocean strengthened, while the lows moved southward, reducing the pressure gradient over Alaska. On 6 June, a low over the Gulf of Alaska and an Aleutian low, together with a surface trough over interior Alaska, yielded unsettled weather with some showers (Fig. 2). A ridge built up over Alaska in the following days. It started moving into Canada on 7 June, while an Aleutian low increased moisture in southern Alaska. This weakening low pressure system governed interior Alaska for several days. A weak ridge aloft over interior Alaska limited isolated convection to the area near the surface trough located across interior Alaska. On 11 June, a ridge formed over the Bering Sea. It moved slowly westward, while the low over interior Alaska dissipated. A new Aleutian low formed on 13 June and moved slowly westward, thereby weakening the ridge. On 15–16 June, the ridge reaching from the Bering Sea to the Gulf of Alaska provided hot and dry weather along with some thunderstorms. It strengthened and moved westward, while the Aleutian low moved northward. On 20 June, the ridge broke down. Low pressure systems arriving over southern Siberia and from the Canadian border provided precipitation to interior Alaska (Fig. 2). The low over interior Alaska moved north, while the Siberian low moved south as a ridge developed over Alaska on 21 June. On 22 June, a disturbance moved through northeast interior Alaska. In the following days, alternate weakening and strengthening of a ridge over Alaska governed the situation. The ridge provided warm conditions along with afternoon thunderstorms over parts of interior Alaska. Red flag warnings were issued because of strong northeast winds over the eastern upper Yukon River valley and southeastern Brooks Range. On 26 June, low pressure with weak precipitation influenced interior Alaska (Fig. 2). The next day, high pressure built over interior Alaska, causing warmer and drier weather. An Aleutian low moved in on 28 June. At the end of June, an upper-level ridge persisted over northern Alaska, with a low in the Gulf of Alaska feeding moisture into interior Alaska.

Fig. 2.

(a) Time series of observed (diamonds) and predicted (solid line) precipitation averaged over all sites for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Note that the other forecast lead times and the ensemble show similar behavior. Skill scores are given for various precipitation thresholds: (b) BS, (c) TS, and (d) HS. Plots are based on all sites and days in June 2005 for which data were available. The black (gray) solid, dotted, and dashed lines stand for the 24-, 48-, and 72-h forecast lead times (96- and 120-h forecast lead times, and the ensemble), respectively.


In interior Alaska, June 2005 was, on average, warmer with less precipitation, slightly lower relative humidity, and weaker winds than the 1949–2006 means; the first 7 days and 19–26 June were colder than the mean, while the remaining June days were warmer than the mean (Alaska Climate Center 2007). July was relatively wet for interior-Alaska conditions.

3. Experimental design

a. Data

Because of interior Alaska’s low population density (0.04 inhabitants per square kilometer), observations are sparse. Daily mean wind speed; mean, maximum, and minimum relative humidity; and mean, maximum, and minimum air and fuel temperatures are available at 29 sites (Fig. 1). Pressure and daily accumulated shortwave downward radiation were reported at 10 and 13 of these sites, respectively.

Precipitation measurements exist from 7 first-class stations and 33 volunteer-run (mesonet) sites (Fig. 1). Despite their unknown data quality, the mesonet data are included in the evaluation to increase the spatial coverage from 0.00004 to 0.0004 gauges per square kilometer. Note that the World Meteorological Organization (1974) recommends a minimum precipitation gauge density of 0.0001–0.00067 gauges per square kilometer for polar regions.

b. Fire indices

The Fosberg (1978) fire-weather index (FFWI) assumes grass as the land-cover type and depends on current weather, namely wind speed and equilibrium-moisture content. The latter is a nonlinear function of relative humidity and air temperature.
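To make this dependence concrete, a minimal sketch of an FFWI calculation follows. It uses the commonly cited Fosberg-type coefficients (temperature in degrees Fahrenheit, wind speed in miles per hour) and is illustrative only; the constants should be verified against Fosberg (1978) before any operational use.

```python
import math

def fosberg_ffwi(temp_f, rh_pct, wind_mph):
    """Illustrative Fosberg fire-weather index (commonly cited coefficients).

    temp_f: air temperature (deg F); rh_pct: relative humidity (%);
    wind_mph: wind speed (mph).
    """
    # Equilibrium moisture content (%), a nonlinear function of RH and temperature
    if rh_pct < 10.0:
        m = 0.03229 + 0.281073 * rh_pct - 0.000578 * rh_pct * temp_f
    elif rh_pct <= 50.0:
        m = 2.22749 + 0.160107 * rh_pct - 0.01478 * temp_f
    else:
        m = (21.0606 + 0.005565 * rh_pct**2
             - 0.00035 * rh_pct * temp_f - 0.483199 * rh_pct)
    # Moisture damping function; drier fuels (small m) give values near 1
    x = m / 30.0
    eta = 1.0 - 2.0 * x + 1.5 * x**2 - 0.5 * x**3
    # Wind term; the 0.3002 scaling keeps typical values roughly in the 0-100 range
    return max(0.0, eta * math.sqrt(1.0 + wind_mph**2) / 0.3002)

# Hot, dry, windy afternoon vs. cool, humid, calm night (hypothetical inputs)
print(fosberg_ffwi(80.0, 25.0, 15.0), fosberg_ffwi(50.0, 95.0, 2.0))
```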

Fuel buildup relates to previous weather (Stocks et al. 1998, 2000; Hess et al. 2001; Westerling et al. 2003). While the FFWI lacks such information, the modified FFWI (mFFWI) considers a fuel-availability factor (Goodrick 2002) that evaluates soil and duff-layer dryness. This factor depends on the Keetch–Byram drought index (KBDI; Keetch and Byram 1968). The KBDI depends on annual mean precipitation and the previous day’s KBDI, decreases proportionally to daily accumulated precipitation, and increases for days without rain depending on the daily maximum temperature.
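The daily KBDI update just described can be sketched as follows. This is a simplified, illustrative rendering of the standard imperial-unit formulation (KBDI in hundredths of an inch of soil moisture deficit, 0–800); in particular, the event-based net-rainfall bookkeeping of Keetch and Byram (1968) is reduced here to a fixed 0.2-in. daily threshold, and the coefficients should be checked against the original before use.

```python
import math

def kbdi_update(kbdi_prev, tmax_f, rain_in, annual_rain_in):
    """One-day KBDI update (simplified, illustrative).

    kbdi_prev: yesterday's KBDI (0-800); tmax_f: daily maximum temperature (deg F);
    rain_in: daily accumulated precipitation (in.); annual_rain_in: mean annual
    precipitation (in.).
    """
    # Rainfall in excess of 0.2 in. reduces the accumulated moisture deficit
    net_rain = max(0.0, rain_in - 0.2)
    q = max(0.0, kbdi_prev - 100.0 * net_rain)
    # Drought factor: grows with the daily maximum temperature on rain-free days
    # and is damped by the site's mean annual precipitation
    dq = ((800.0 - q)
          * (0.968 * math.exp(0.0486 * tmax_f) - 8.30) * 1.0e-3
          / (1.0 + 10.88 * math.exp(-0.0441 * annual_rain_in)))
    return min(800.0, q + max(0.0, dq))

# A warm, rain-free day raises the deficit; a rainy day lowers it (hypothetical inputs)
print(kbdi_update(150.0, 80.0, 0.0, 11.0), kbdi_update(150.0, 60.0, 0.5, 11.0))
```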

Standard fire indices of NFDRS [see Cohen and Deeming (1985) for equations] are the spread component (SC), the energy release component (ERC), the burning index (BI), and the ignition component (IC). The SC evaluates the maximum rate at which a fire moves forward for the given conditions. It is the most variable of the indices, with its daily changes related to wind speed, fine fuel moisture, and live woody fuel moisture. ERC approximates the amount of heat released during the passage of the flaming front. This cumulative index depends on the entire fuel complex and is the least variable on a day-by-day basis. The BI is 10 times the theoretical flame length, an indicator of the potential effort required to suppress the fire. The BI is sensitive to wind speed and is highly variable with time. The IC expresses the probability that a fire requiring suppression will be ignited. It ranges between 0 (no firebrand; cool, damp conditions) and 100 (dry, windy days). The IC strongly fluctuates on a daily basis because it depends on fuel moisture content and wind speed.
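As a small worked example of the BI definition above, the theoretical flame length implied by a BI value follows directly from BI being 10 times the flame length; the feet-based convention assumed here is the one conventionally used in the NFDRS, and the metric value is merely a unit conversion added for convenience.

```python
def flame_length_from_bi(bi):
    """Theoretical flame length implied by a burning index value.

    Assumes the conventional NFDRS relation BI = 10 x flame length in feet.
    Returns (feet, meters).
    """
    flame_ft = bi / 10.0
    return flame_ft, flame_ft * 0.3048

print(flame_length_from_bi(45.0))  # a BI of 45 implies ~4.5-ft (~1.4-m) flames
```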

In this case study, FFWI, mFFWI, SC, ERC, BI, and IC are all determined from predicted and observed quantities to assess WRF’s suitability for interior Alaska fire-weather forecasts. Because daily extremes of temperature and humidity influence the fire risk, FFWI and mFFWI are also determined for maximum temperature and minimum relative humidity (subscript max) and for minimum temperature and maximum relative humidity (subscript min). The former assesses the hottest/driest conditions of a day, usually around local noon. The latter corresponds to the wettest/coolest conditions, usually at nighttime when most thunderstorms occur in Alaska (McGuiney et al. 2005).

c. Analysis

The scientific community has gone to great effort to evaluate WRF; WRF simulations compare favorably with analytical solutions, with results obtained by the thoroughly evaluated fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5) and NCEP’s Eta Model, and with observations (Done et al. 2004; Klemp and Skamarock 2004; Knievel et al. 2004; Cheng and Steenburgh 2005; Grell et al. 2005; Kusaka et al. 2005; Davis et al. 2006; Kain et al. 2006).

The main advantage of NFDRS indices over Fosberg-type indices is that they depend on more parameters, including accumulated quantities. However, as in any multiquantity index and/or index depending on accumulated features, incorrect predictions (or measurement errors) may propagate, thereby introducing uncertainty into the calculated quantity (Mölders et al. 2005). Thus, any calculated fire index is no better than the data used to produce it, and the aforementioned advantage can turn into a disadvantage. Therefore, WRF’s feasibility for predicting fire-risk-relevant meteorological quantities (precipitation, air temperature, relative humidity, wind, and daily accumulated shortwave downward radiation) is evaluated. Fire indices derived from meteorological observations are compared to those calculated from predictions. For simplicity, fire indices determined from predictions and observations are denoted “predicted” and “observed,” respectively.

To evaluate the quantitative forecast skill (e.g., temperature, wind speed), the bias, root-mean-square error (RMSE), correlation skill score, and standard deviation of error (SDE) are calculated (Table 1). Bias indicates systematic errors resulting from model parameters, deficiencies, parameterizations, and numerical approximations. For a perfect forecast, the difference between the ith predicted and observed quantity is zero for all i. Note that the bias can be zero if the sum of the negative differences equals that of the positive differences. RMSE evaluates the overall performance. Since bias and variance contribute to RMSE, RMSE is very sensitive to systematic and large errors (e.g., incorrect extremes). The correlation skill score, r, evaluates how well the values of the forecast F correspond to those of the verifying observation (predictand), P. This score is insensitive to some systematic errors, like a constant difference between F and P or their amplitudes (von Storch and Zwiers 1999). Thus, perfect linear correlation exists for r = 1 or r = −1, although only r = 1 corresponds to a perfect forecast. SDE indicates the variability of random errors relative to the bias, in a sense similar to the RMSE. Both known and unknown error sources can contribute to SDE. A source of random error can be uncertainty in the initial and boundary conditions and/or observations. However, the experimental design of this study does not exclusively limit random errors to these sources. According to von Storch and Zwiers (1999), a perfect forecast has F = P, r = 1, RMSE = 0, and 100% of the variance is explained.
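For reference, the quantitative scores of Table 1 can be computed as in the following sketch, which assumes paired one-dimensional arrays of forecasts F and verifying observations P (hypothetical inputs) and uses NumPy only.

```python
import numpy as np

def quantitative_scores(forecast, observed):
    """Bias, RMSE, SDE, and correlation skill score as described in Table 1."""
    f = np.asarray(forecast, dtype=float)
    p = np.asarray(observed, dtype=float)
    phi = f - p                              # phi_i = F_i - P_i
    bias = phi.mean()                        # systematic error
    rmse = np.sqrt((phi ** 2).mean())        # overall error; sensitive to outliers
    sde = np.sqrt(rmse ** 2 - bias ** 2)     # random-error spread about the bias
    r = np.cov(f, p)[0, 1] / np.sqrt(f.var(ddof=1) * p.var(ddof=1))
    return bias, rmse, sde, r

# Example with hypothetical daily mean temperatures (K)
print(quantitative_scores([284.2, 286.0, 281.5, 288.1], [283.0, 287.2, 280.9, 286.5]))
```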

Table 1.

Equations used to evaluate the quantitative and categorical skill levels of forecasts (e.g., Anthes 1983; Anthes et al. 1989; Hanna 1994; Wilks 1995; von Storch and Zwiers 1999). Here, ϕi gives the difference between the ith predicted and observed quantities and n is the number of observations. Here, F and P denote the forecast and the verifying observation (predictand); cov(F, P), var(F), and var(P) stand for the covariance between F and P and the variance of F and P, respectively. Here, N1 is the number of forecasted events that occurred, N2 is the number of forecasted events that did not occur, N3 is the number of events that occurred but are forecasted to not occur, and N4 is the number of events that did not occur and are forecasted to not occur (cf. Table 2). For a perfect forecast N2 = N3 = 0. Multiplication of accuracy by 100 gives the percentage of correct forecasts. See text for further details on skill scores.


The categorical skill (e.g., occurrence or nonoccurrence of an event; Table 1) is evaluated for different thresholds of precipitation and fire indices. A perfect forecast means the number of forecasted events that did not occur (N2) and the number of events that occurred but were forecasted to not occur (N3) are both zero (Table 2). The bias score (BS) measures the ratio of the frequency of forecast events to that of observed events. It indicates the tendency to over- (BS > 1) or underpredict (BS < 1). When BS = 1, then N1 + N2 = N1 + N3, where N1 is the number of forecasted events that occurred. Consequently, for N2 = N3 > 0, the forecast is imperfect. The categorical score (CS) measures the success of the forecast in discriminating between events and nonevents. This measure is good for evaluating rare events. For CS = 0, all predictions are incorrect. For CS = 1, the prediction is perfect only in the case of the event because CS does not account for false alarms; that is, for N2 > 0 and CS = 1 the forecast is imperfect. The threat score (TS) measures the success in correctly predicting an event at a point. It is sensitive to hits and penalizes misses and false alarms. The TS typically provides poorer scores for rarer events than for more frequent ones because some hits can result from plain random chance. The Heidke skill score (HS) measures the fraction of correct forecasts after eliminating forecasts that would be correct by random chance. For forecasts worse than random chance, HS < 0; for random chance, HS = 0; and for perfect forecasts, HS = 1. Accuracy gives the fraction of correct forecasts. Accuracy can be misleading because it is heavily influenced by the most common category, usually “no event” in the case of rare events (e.g., high fire risk, heavy precipitation).
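Analogously, the categorical scores can be computed directly from the contingency-table counts N1–N4 of Table 2. The sketch below follows the score descriptions given above, with the Heidke skill score in its usual 2 × 2 form; the example counts are hypothetical.

```python
def categorical_scores(n1, n2, n3, n4):
    """Categorical skill scores from a 2 x 2 contingency table (cf. Table 2).

    n1: forecast yes / observed yes (hits)
    n2: forecast yes / observed no  (false alarms)
    n3: forecast no  / observed yes (misses)
    n4: forecast no  / observed no  (correct rejections)
    """
    n = n1 + n2 + n3 + n4
    bias_score = (n1 + n2) / (n1 + n3)          # >1: overprediction, <1: underprediction
    threat_score = n1 / (n1 + n2 + n3)          # hits, penalized by misses and false alarms
    accuracy = (n1 + n4) / n                    # fraction of correct forecasts
    heidke = (2.0 * (n1 * n4 - n2 * n3)
              / ((n1 + n3) * (n3 + n4) + (n1 + n2) * (n2 + n4)))
    return bias_score, threat_score, accuracy, heidke

# Example: 20 hits, 10 false alarms, 15 misses, 255 correct rejections
print(categorical_scores(20, 10, 15, 255))
```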

Table 2.

Contingency table used to evaluate the categorical skill for forecasting an event exceeding a given threshold.


High fire risk does not mean that a fire will occur. Thus, actual fires are used for indirect evaluation by examining whether high fire risk is predicted where fires ignited in June 2005.

4. Fire-weather forecast evaluation

Typically, all predicted quantities are less accurate close to the upwind lateral boundaries than elsewhere. WRF starts with zero cloud and precipitation particles. Like all mesoscale models, WRF requires some spinup time for cloud and precipitation formation. Consequently, WRF underestimates the absorption of radiation by clouds and the precipitation rate in the first 6 h or so and at the upwind lateral boundaries on cloudy/rainy days; it also underestimates precipitation duration for the 24-h forecast lead (Fig. 2).

a. Pressure

On average, WRF overestimates pressure by about 4 hPa (Table 3). The average elevation of the 10 sites for which pressure data are available is about 84 m less than that of the WRF grid cells they fall into. Actual and grid-cell elevations match within 100 m for six sites; the actual elevations of three pressure sites are more than 300 m less than the terrain height of the grid cell they fall into. These lower site elevations, as compared to the grid-cell averages, may partly explain the overestimate, but only for these three sites. The higher SDE than bias and the insensitivity to the forecast lead suggest that the difficulty in reproducing pressure relates to the lateral boundary conditions; that is, the pressure values provided by the FNL data may be slightly too high. Differences between the actual elevation and that used in producing the FNL data may be the reason.

Table 3.

Mean and standard deviation, RMSE, SDE, bias, and correlation skill score (r) of the predicted and observed sea level pressure (p), daily accumulated shortwave downward radiation (R↓s), 2-m air temperature (Ta), relative humidity (RH), and wind speed (υ) for the 24-h forecast lead time and the ensemble (in parentheses) for the monthly averages over all sites for which data were available.


b. Precipitation

On average, WRF captures the temporal evolution of precipitation well, but greatly underestimates it on days with heavy precipitation (Fig. 2). Predictions are better for nonconvective than for convective events. Generally, the RMSE, bias, and SDE are smaller for the ensemble than for the individual forecast leads. A strong negative bias with high RMSE exists during frontal passages with high precipitation rates, while the SDE is relatively low during the same time. Convective events show a positive bias with relatively higher SDE than frontal events. For convective events, the RMSE differs widely among the various forecast leads for a given day. Although actual and model terrain heights differ by more than ±100 m for 43% of the precipitation sites, no obvious relation between SDE, bias, or RMSE and terrain height differences exists.

Typically, about 7.5 mm day−1 of precipitation reduces the fire danger for several days. For long-lasting dry periods, 12.5–19 mm day−1 removes the fire danger for a week. Therefore, our evaluation focuses on high thresholds. The overall accuracy is greatest for long (>72 h) forecast leads for low thresholds (<7.5 mm day−1) and for short forecast leads for high thresholds. For 7.5, 10, 12.5, 15, and 20 mm day−1, the accuracy exceeds 90%, 93%, 95%, 96%, and 97%, respectively. The ensemble accuracy is similar to that of the individual forecasts.

Typically, categorical scores range between 72% and 96% for ≥0.25 mm day−1 (e.g., Anthes et al. 1989). WRF detects 75% of the sites receiving precipitation >0.1 mm day−1 for the 24-h forecast lead and for the ensemble. This means that for this case study, WRF performs at the lower end of the range of typical success rates at forecasting the occurrence of precipitation that exceeds a threshold at a given point. The probability of detecting sites with ≥7.5 mm day−1 is between 16% and 22% for the 24- and 120-h forecast leads and 19% for the ensemble.

Typical threat scores for a 24-h forecast and precipitation thresholds ≥0.25 and 2.5 mm day−1 are 35% and 20%, respectively (e.g., Anthes 1983). Zhong et al. (2005) reported threat scores of ∼38% for precipitation ≥0.25 mm day−1 for 48-h MM5 simulations with the same horizontal resolution as that used here; percentages decreased slightly at higher thresholds. WRF achieves similar scores for the 0.1 mm day−1 threshold. It captures more than 21% (20%) of the points correctly for precipitation >0.25 mm day−1 for the 24- (48-) h forecast lead and more than 25% for the ensemble (Fig. 2). For the 2.5 mm day−1 threshold, the threat scores amount to 16% (19%) for the 24- (48-) h forecast lead and 18% for the ensemble. WRF’s skill in predicting whether a location will receive a threshold amount of precipitation decreases slightly with increasing forecast lead and/or threshold. For most thresholds, the ensemble captures the location better than do individual forecast leads. However, for thresholds ≥10 mm day−1, the 24-h forecast lead is best.

WRF underestimates precipitation frequency for all thresholds ≥0.25 mm day−1 with decreasing bias scores for thresholds of up to 10 mm day−1 for all forecast leads and of up to 15 mm day−1 for the long forecast leads (Fig. 2). WRF captures frequency better for precipitation ≥5 mm day−1 than for lower rates, and performs better at longer than shorter forecast leads.

For summer (winter), Zhong et al. (2005) obtained HSs for thresholds between 0.25 and 10 mm day−1 that ranged between 0.26 and 0.17 (0.32 and 0.24), respectively. For June 2005, the HSs of these thresholds are 0.22 and 0.31 for the 24-h forecast lead and 0.24 and 0.27 for the ensemble (Fig. 2). Generally, the ensemble yields higher HSs than individual forecast leads except for thresholds ≥7.5 mm day−1 for which the 24-h forecast is best.

In summary, WRF 1) acceptably captures the location of precipitation with the ensemble, 2) underestimates the number of points receiving precipitation and on average the amount, 3) captures high precipitation amounts (≥7.5 mm day−1) and locations receiving them best with the 24-h forecast lead, and 4) exhibits a level of precipitation performance for interior Alaska that is comparable to that of other mesoscale models applied to midlatitudes.

c. Shortwave downward radiation

WRF overestimates the accumulated shortwave downward radiation on average by about 10% for all forecast leads. The overestimation is highest for the 24-h forecast lead (Table 3) because of the spinup time required for cloud formation. Nevertheless, forecast leads differ by less than 500 W m−2 for the same day. For all forecast leads, WRF predicts accumulated shortwave downward radiation excellently for 9–11, 13–15, 17–18, 20, and 23 June; on the remaining days, it slightly overestimates this quantity (Fig. 3). This means that the predicted and observed daily accumulated shortwave downward radiation will agree better if the interior Alaska weather is governed by a high pressure ridge. Discrepancies in the predicted and actual cloudiness play a role.

Fig. 3.

(a) Time series of observed (diamonds) and predicted (solid line) daily accumulated shortwave downward radiation for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Plots for other forecast lead times or the ensemble look similar. Values shown represent averages over all sites with data. Comparison of discrepancies between predicted and observed daily accumulated shortwave downward radiation ΔRs and (b) differences between predicted and observed 2-m air temperatures, (c) differences between predicted and observed maximum air temperatures, and (d) differences between predicted and observed minimum air temperatures. Plus signs, asterisks, diamonds, triangles, squares, and crosses stand for the 24-, 48-, 72-, 96-, and 120-h forecast leads and the ensemble, respectively.


Daily accumulated shortwave radiation is used to determine the state of the weather (SOW). An empirical relationship between precipitation and shortwave downward radiation is solved for shortwave radiation to parameterize SOW for sites without SOW or shortwave radiation reports. Incorrectly predicted shortwave downward radiation affects SOW and, consequently, predicted temperature and relative humidity at the fuel–atmosphere interface.

d. Temperature

WRF captures the temporal evolution of the daily mean, maximum, and minimum temperatures well (Fig. 4) for all forecast leads. Mean and maximum temperature forecasts are best for the 120-h forecast lead time, but minimum temperature forecasts are best for the 24-h forecast lead time. On average, individual forecast leads differ by less than 1, 1, 1.5, and 0.7 K from each other for the daily mean, maximum and minimum temperature, and dewpoint temperature, respectively. Predicted and observed daily mean temperatures correlate strongly and RMSEs remain ≤2.8 K (Table 3). For daily maximum temperatures, the correlation still is ≥0.819, but the RMSE is ∼2.5 K higher than for daily mean temperatures. Daily minimum temperatures have RMSEs < 3.6 K and r > 0.604.

Fig. 4.

Time series of observed (diamonds) and predicted (solid line) (a) 2-m air temperature, (b) maximum temperature, (c) minimum temperature, and (d) daily average dewpoint temperature for the 24-h forecast lead time. Values shown represent averages over all sites with data. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Plots for other forecast lead times or the ensemble look similar.


On an ensemble average, WRF overestimates the minimum temperature by 1.7 K, but underestimates the mean and maximum temperatures by 1.5 K (Table 3) and 4.2 K, respectively. A similar bias exists for individual forecast leads. Some systematic error results from the misrepresentation of free convection, stability, cloudiness, and soil types. Note that in interior Alaska, organic soil is widespread, while WRF, like other mesoscale models, only considers mineral soils. Substantial overestimation of the minimum temperature coincides with substantial overestimation of the daily accumulated solar radiation; substantial underestimation of the maximum and daily mean temperatures correlates with substantial underestimation of the daily accumulated solar radiation (Figs. 3 and 4). Furthermore, discrepancies between grid-cell and site elevations contribute to the bias. At 14 of the 29 sites with temperature measurements, the actual and model elevations differ by more than 100 m in absolute value. On average for these sites, the grid-cell and actual heights differ by 162.4 m, which explains about 1 K of the difference. Some bias relates to land cover. Sites are usually in large, open grass plots, while WRF assumes the dominant land cover within a grid cell to be representative of the exchange of heat, water vapor, and momentum at the atmosphere–surface interface. During the night, local effects (terrain and land cover) can be decisive for inversion or dew formation; around local noon, the land cover has the greatest impact on the partitioning of the solar radiation into the ground heat flux and the sensible and latent heat fluxes, with consequences for temperature, relative humidity, and convection. At night, model deficits arising from misinterpretation of the local conditions evidently affect model performance more strongly than in the morning or evening, because different stability parameterizations are used for the typically stable nighttime conditions than for the neutral or already/still unstable conditions in the morning and evening.
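The roughly 1-K contribution quoted above follows from the elevation mismatch if a typical environmental lapse rate of about 6.5 K km−1 is assumed (an assumption made here for illustration; the original analysis does not state which lapse rate was used):

\[
\Delta T \approx \Gamma\,\Delta z \approx 6.5\ \mathrm{K\,km^{-1}} \times 0.162\ \mathrm{km} \approx 1.1\ \mathrm{K}.
\]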

On average, the ensemble SDE amounts to 2.1, 3.2, and 3.4 K for the daily mean (Table 3), minimum, and maximum temperatures, respectively. The SDE is similar for the individual forecast leads. The higher random errors for the daily extremes than for the averages indicate that the initialization of the soil conditions may be a reason for not capturing the diurnal amplitude. The sensible heat flux depends on the difference between the near-surface air and surface temperatures, while the ground heat flux is proportional to the difference between the soil temperature in the uppermost soil layer and that at the soil surface. Consequently, initializing the soil as too warm or too cold may affect the sensible and ground heat fluxes and even invert their signs. In permafrost, incorrect partitioning of the total soil water into ice and liquid fractions can also result in energy going into thawing rather than warming of the soil, and vice versa (Mölders and Romanovsky 2006), with consequences for the sensible and ground heat fluxes at the surface. For all of these reasons, incorrect initialization of the soil conditions may yield a warming of near-surface air when in reality there should be cooling, and vice versa.

In summary, WRF underestimates diurnal temperature amplitudes but captures daily averages well. Therefore, if fire indices are calculated based on WRF forecasts, they may be more trustworthy when determined with daily averages of forecasted meteorological quantities than those based on forecasts of daily extremes. However, the key idea of rating fire risk is that ratings represent near-worst-case conditions at or near the peak of the normal burning period. Thus, improvements in WRF’s prediction of temperature extremes have high potential for reducing the discrepancies between WRF-derived and observation-derived fire indices.

WRF captures the temporal evolution of dewpoint temperatures acceptably (Fig. 4). On average, WRF overestimates the daily average dewpoint temperature by 1.1 and 1.7 K for the 24-h forecast lead and ensemble, respectively. This bias occurs for the same reasons discussed for air temperature. RMSEs are about 3.5 K with sufficient correlation (>0.577) for all forecast leads. The high random errors (∼3.1 K) indicate that the initial and boundary conditions contribute to dewpoint temperature errors.

e. Relative humidity

Errors in temperature prediction propagate into the calculated relative humidity. WRF captures the temporal evolution of the daily mean and minimum relative humidity acceptably and that of the maximum relative humidity broadly (Fig. 5). For the majority of days, the ensemble outperforms the individual forecast leads. On average, the individual forecast leads differ by less than 5% (absolute) in predicting the daily average relative humidity. The same is true for both the maximum and minimum relative humidities.

Fig. 5.

Time series of observed (diamonds) and predicted (solid line) daily (a) average, (b) maximum, and (c) minimum relative humidity for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead and vertical bars for the observations. Values shown represent averages over all sites with data. Plots for other forecast lead times or the ensemble look similar.


Discrepancies in the timing and position of frontal systems cause errors, especially in the precipitation and maximum relative humidity (Figs. 2 and 5). After a rain event, over- or underpredicted precipitation results in relative humidity errors. Underestimated interception loss may contribute to relative humidity that is too low after precipitation. Note that in forests under the same environmental conditions, evaporation of intercepted water may be several times greater than transpiration (Stewart 1977).

Maximum values of the predicted and observed relative humidities have lower correlation and greater RMSE than are found for minimum or daily mean relative humidities. However, the correlations still exceed 0.402 and the RMSEs remain below 20% (absolute). When looking at the minimum, maximum, and daily mean relative humidities, the minimum values of the predicted and observed relative humidities exhibit the highest correlation (r ≥ 0.683, RMSE ≤ 17%), but the RMSEs are the lowest for the daily mean relative humidity (≤14%, r ≥ 0.648). The 24-h forecast lead and ensemble provide the best results most of the time (Table 3). On average, WRF overestimates the daily minimum and mean relative humidities by 12% and 2%, respectively, while it underestimates the maximum relative humidity by 11%. The higher bias for extremes than for daily averages may result from systematic errors in land cover and terrain representation. The SDEs of the minimum, mean, and maximum relative humidities are 12%, 13% (Table 3), and 17%, respectively; that is, the random error from the lateral boundary conditions affects the performance.

f. Wind

On average, the wind direction is biased ∼10° to the north. However, local differences may be much greater due to channeling effects and differences between the model and actual terrain height. The accuracy of the wind direction shows little dependency on forecast lead time.

On average, the daily mean wind speed differs by less than 0.2 m s−1 among the individual forecast lead times. WRF captures the temporal evolution of the daily mean wind speed well except on 5, 12, and 28 June (Fig. 6). On 12 (28) June, the wind speed strongly (slightly) increases in the model but decreases in nature; on 5 June, the observed wind speed slightly increases, but WRF predicts a slight decrease.

Fig. 6.

Time series of observed (diamonds) and predicted (solid line) daily average wind speed for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Values shown represent averages over all sites with data. Plots for other forecast lead times or the ensemble look similar.


As is typical for complex terrain, the predicted and observed wind speeds correlate weakly on a day-by-day or site-by-site basis because of elevation differences between the real and model worlds and other local effects (e.g., channeling, misinterpretation of roughness length). Furthermore, the terrain is smoother in any mesoscale model than in nature. However, the correlation determined over all sites and days for the various forecast leads and the ensemble ranges between 0.462 (48-h forecast lead) and 0.483 (24-h forecast lead). The aforementioned systematic differences yield a positive bias of ∼0.8 m s−1 (Table 3). The SDE is about 1.06 m s−1; that is, the boundary conditions cause errors. Generally, RMSEs are less than 1.4 m s−1 for all forecast leads and the ensemble; that is, they remain below the 3.2 m s−1 for 24-h forecasts and 4.4 m s−1 for 72-h forecasts reported by Anthes et al. (1989) and the 1.57 m s−1 for 24-h forecasts reported by Zhong and Fast (2003). This seemingly better performance of WRF compared with other models, however, may be an artifact of the climatologically low wind speeds in interior Alaska in general and in June 2005 in particular; the June average wind speed for Fairbanks is 3.3 m s−1.

5. Fire indices evaluation

Fire indices derived from WRF output and from observations do not differ significantly. For all fire indices, the spatial standard deviation increases with time for both the predictions and the observations because the level of fire danger develops differently at the various sites (Figs. 7 and 8).

Fig. 7.

Time series of the observed (diamonds) and predicted (solid line) daily (a) standard, (b) minimum, (c) maximum, and (d) modified Fosberg fire-weather indices for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Values shown represent averages over all sites with data. Plots for other forecast lead times or the ensemble look similar. Note that y-axis labels differ among the panels. In Alaska, FFWIs > 3, 13, 23, and 28 mean moderate, high, very high, and extreme fire risk, respectively.


Fig. 8.

Time series of the observed (diamonds) and predicted (solid line) daily (a) spread component, (b) energy release component, (c) ignition component, and (d) burning index for the 24-h forecast lead time. Standard deviations with respect to the sites are represented by the gray area for the 24-h forecast lead time and vertical bars for the observations. Values shown represent averages over all sites with data. Plots for other forecast lead times or the ensemble look similar.


Errors are within the range of observational uncertainty (Figs. 7–9). The SDE, bias, and RMSE increase marginally with time for KBDI, SC, ERC, IC, and BI because errors propagate for “accumulative” quantities. However, these increases are relatively small compared to the overall errors. Errors in predicted precipitation are the main reason for errors in KBDI, while relative humidity and precipitation errors accumulate for SC, ERC, IC, and BI through the calculated 100- and 1000-h fuel moisture.

Fig. 9.

Scatterplots of the (a) mFFWI, (b) SC, (c) ERC, (d) IC, and (e) BI, as well as (f) categorical skill scores for various ignition component thresholds for the 24-h forecast lead time. Plots are based on all sites and days in June 2005 for which data were available. In (a)–(e) the one-to-one line is superimposed; in (f) the black solid, dotted, and dashed lines represent the threat scores, accuracy, and categorical scores, respectively. Values on the y axis vary among the panels. Other forecast leads and the ensemble show similar behavior; other Fosberg-type indices show similar behavior as mFFWI (therefore not shown).


The accuracy exceeds 80% for forecasts of high fire risk (Table 4; Fig. 9). However, high fire risk is a rare event and accuracy can be due to correct “no events” or random chance. Except for high IC (≥38) and all fire danger categories for Fosberg-type indices, the fire indices’ HSs exceed those for precipitation (Table 4; Fig. 2). The correct point is predicted for high fire risk in over 70% of the events for ERC and BI, and in less than 50% for IC, SC, and Fosberg-type indices. Threat scores decrease with increasing fire risk (Table 4) because errors in predicted meteorological quantities propagate into the indices. The sum of these findings means that after eliminating “correct forecasts” due to random chance, high fire risk is better forecasted than is its reduction by precipitation (≥7.5 mm day−1).

Table 4.

Categorical skill for various fire danger categories for the SC, BI, ERC, and mFFWI. The letters L, M, H, VH, and E stand for low, moderate, high, very high, and extreme, respectively. A long dash means that there are no values for the respective fire danger category in this case study or that the score is not determined because of too few data points being available for a meaningful statistic. No values for a class for an index may also be a result of missing values in the observations for the respective event. Note that the various scores address different aspects and include different events (see text for further details); fire indices evaluate different aspects of fire risk for which the fire danger categories may differ by a class among indices at the borderline between classes (see text for further details).


a. Fuel conditions

KBDI evaluates the accumulated moisture deficit related to previous weather. In June 2005, the KBDI increases with time in interior Alaska. The predicted KBDI marginally differs for the various forecast leads. On average, KBDI is underestimated by 12 (Table 5) because WRF underestimates the daily maximum temperature (Fig. 4). The discrepancy in KBDI increases slightly during periods with no to light precipitation and remains nearly constant otherwise.

Table 5.

Mean and standard deviation, RMSE, SDE, bias, and correlation skill score (r) of the predicted and observed KBDI; equilibrium moisture content (m) based on daily (no subscript), daytime (subscript min), and nighttime (subscript max) temperature and moisture conditions; equilibrium fuel moisture content (mfuel); fuel temperature Tf; fuel moisture RHfuel; moisture content RHXX at different time lags (where XX stands for 1, 10, 100, or 1000 h); FFWI; mFFWI based on daily (no subscript), daytime (subscript min) and nighttime (subscript max) temperature and moisture conditions; SC; ERC; IC; and BI for the 24-h forecast lead time and the ensemble (in parentheses) for the monthly average over all sites for which data were available. In AK, FFWIs of >3, 13, 23, and 28 indicate moderate, high, very high, and extreme fire risk, respectively.


The equilibrium moisture content depends on the temperature and moisture at the fuel–atmosphere interface. It serves to calculate Fosberg-type indices and various time-lag moisture contents. On average, the equilibrium moisture content that is derived from WRF-predicted extremes and daily averages depends marginally on the forecast lead. The ensemble does not provide better results than individual forecast lead times (Table 5). While there is good agreement between the predicted and observed temporal evolutions of the daily average and minimum equilibrium moisture content, the predicted and observed maximum equilibrium moisture content agree only broadly. The maximum equilibrium moisture content is underestimated most of the time. The predicted and observed equilibrium moisture contents are better correlated and the RMSEs are smaller for daily and afternoon values than for nighttime values (Table 5). While the equilibrium moisture content SDE is lowest for afternoon conditions, bias is lowest for daily averages. This behavior among the skill scores indicates that the predicted daily mean equilibrium moisture content is sensitive to random errors, while the predicted afternoon equilibrium moisture content is highly sensitive to the systematically overestimated minimum relative humidity and underestimated maximum temperature from WRF. As mentioned above, misinterpretation of local conditions causes systematic errors, and initial and boundary conditions cause random errors in predicted meteorological conditions.
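As a rough illustration of how the equilibrium moisture content is obtained from the temperature and relative humidity at the fuel–atmosphere interface, a sketch of the piecewise regression commonly quoted for Fosberg-type indices is given below; the coefficients and the Fahrenheit-based form are assumptions of this example and should be checked against the formulation actually used.

```python
def equilibrium_moisture_content(temp_f, rh_percent):
    """Equilibrium moisture content (%) from air temperature (deg F) and
    relative humidity (%), using the widely reproduced piecewise fit for
    Fosberg-type indices; coefficients are assumed, not taken from the paper."""
    t, h = temp_f, rh_percent
    if h < 10.0:
        return 0.03229 + 0.281073 * h - 0.000578 * h * t
    if h <= 50.0:
        return 2.22749 + 0.160107 * h - 0.01478 * t
    return 21.0606 + 0.005565 * h**2 - 0.00035 * h * t - 0.483199 * h

# The daily, afternoon (max T / min RH), and nighttime (min T / max RH)
# variants evaluate the same function with the corresponding pairs, e.g.
# (hypothetical values):
m_daily = equilibrium_moisture_content(68.0, 45.0)
m_min = equilibrium_moisture_content(77.0, 30.0)
m_max = equilibrium_moisture_content(50.0, 85.0)
```

Evaluating the same fit with biased afternoon extremes (overestimated minimum relative humidity, underestimated maximum temperature) makes the systematic component of the afternoon error discussed above explicit.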

The NFDRS only considers two Alaska fuel types: tundra and black spruce (cf. Cohen and Deeming 1985). Thus, whichever of these two fuel types dominates in the vicinity of a site is used as its “fuel type.” This misinterpretation of fuel type causes systematic errors in fire indices. Fuel type determines the relative importance of the different time-lag fuel moistures.

The temperature and relative humidity at the fuel–atmosphere interface serve to calculate the 10-h fuel moisture, which in turn affects SC, ERC, IC, and BI. For fuel moisture and temperature, random errors from the initial and boundary conditions are higher than systematic errors (Table 5). If fronts in WRF pass too slowly or too quickly, the timing offsets cause errors in the predicted fuel temperature that range between −2.5 and 3.7 K, on average. On a site-by-site basis, RMSE can be high and correlation low. However, on average, fuel temperature follows the observed overall long-term trend, with a slightly negative bias in the first half of June and a slightly positive bias in the second half. The monthly average fuel temperature shows a slightly negative bias (−0.2 K) for the 24-h forecast lead, but a slightly positive bias (0.2 K) for the ensemble (Table 5). Given the relatively high SDEs, predicted fuel temperatures should not be taken as absolute values.

The prediction of the temporal evolution of the fuel moisture is acceptable. Fuel moisture is overestimated by up to 15% (absolute) during dry periods and underestimated by up to 7% during wet periods. The overall bias is positive (about 2%) for all forecast leads (Table 5). In most cases, the predicted and observed fuel moisture levels are more strongly correlated and have lower RMSEs for the ensemble than for individual forecast lead times. Since the general trend and timing of the fuel temperature and moisture extremes are captured well, 5-day predictions are suitable for assessing the trends of these quantities.

On average, the 100- and 1000-h fuel moisture levels and their standard deviations marginally depend on the forecast lead time. The standard deviation of the predicted values is marginally lower than that of the observed values (Table 5). The WRF-derived and observation-based 10- and 100-h fuel moisture levels do not differ noticeably. The temporal evolutions of the 1- and 10-h fuel moisture levels are captured well.

The predicted 1- and 1000-h fuel moisture levels show a slightly negative bias; the 10- and 100-h fuel moisture levels show a slightly positive bias. The changes in the 100- and 1000-h fuel moisture levels are predicted about 1 day too early. Overestimated (underestimated) 1- and 10-h fuel moisture levels result from overestimated (underestimated) precipitation. Systematic and random errors differ for the various fuel moisture time lags (Table 5) partly because of the variables they depend upon. As discussed above, the various meteorological quantities show different patterns of error behavior and errors of the meteorological quantities propagate in the quantities determined therewith. Systematic and random errors are marginally higher for 1- and 10-h fuel moisture levels than for those at 100 and 1000 h. The 100-h fuel moisture level is least sensitive to randomness (lowest SDE) of all; that is, it can reliably be assessed using WRF data. An incorrect choice of the fuel type can cause some systematic error in the equilibrium moisture content as calculated from WRF predictions and observations.
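The different sensitivities of the time-lag classes can be pictured with the generic exponential time-lag response sketched below; this is a simplification that assumes a constant equilibrium value over the time step and does not reproduce the full NFDRS boundary-condition averaging for the 100- and 1000-h classes.

```python
import math

def timelag_fuel_moisture(m_prev, m_equilibrium, timelag_h, dt_h=24.0):
    """Relax the previous fuel moisture (%) toward the equilibrium moisture
    content with the class's characteristic time lag (h). Over one day a
    1-h fuel essentially reaches equilibrium, while a 1000-h fuel retains
    most of its previous moisture."""
    weight = math.exp(-dt_h / timelag_h)
    return m_equilibrium + (m_prev - m_equilibrium) * weight

# Hypothetical example: equilibrium value 8%, previous fuel moisture 20%.
for lag in (1.0, 10.0, 100.0, 1000.0):
    print(lag, round(timelag_fuel_moisture(20.0, 8.0, lag), 2))
```

The sketch makes plain why errors in daily precipitation and humidity show up quickly in the 1- and 10-h classes but are strongly damped in the 100- and 1000-h classes.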

b. Fosberg-type indices

The temporal evolution of all Fosberg-type indices is predicted acceptably (Fig. 7). Since WRF fails to predict the 14 and 15 June precipitation, it predicts an FFWI increase and fails to indicate a reduction in fire danger for 15 and 16 June (Figs. 2 and 7). Most of the time the ensemble and the 24-h forecast lead time perform best. While including fuel availability only marginally improves the predicted temporal evolution (Fig. 7), it improves the correlation between the predicted and observed indices and reduces bias, RMSE, and SDE (Table 5). Among all Fosberg-type indices, the predicted and observed mFFWIs are the most strongly correlated.

All predicted Fosberg-type indices show a slightly positive bias (e.g., Fig. 9), on average (Table 5), because of WRF’s slightly too warm and moist near-surface atmosphere (Figs. 4, 5 and 7). The variance is marginally lower for the predicted than for the observed FFWI and mFFWI. The standard deviations, SDEs, and RMSEs have similar magnitude (Table 5).

According to the RMSE and correlation coefficients, Fosberg-type indices calculated from daily averages are more reliable than those calculated from daily extremes (Table 5). The underestimation of the diurnal amplitudes of the relative humidity and temperature (Figs. 4 and 5) causes errors in the predicted FFWImin, mFFWImin, FFWImax, and mFFWImax. The RMSEs and SDEs are highest for FFWImin and mFFWImin; that is, random errors from the initial and boundary conditions play a role. The behavior of FFWImin is similar to that of FFWI; the same is true for mFFWImin and mFFWI (Fig. 7). Bias is highest for FFWImax and mFFWImax (Table 5) because of the systematic underestimation of the maximum relative humidity and the overestimation of the minimum temperature (Figs. 4 and 5). Because these errors are large relative to the actual values (Table 5), the predicted FFWImax and mFFWImax are of no practical value. The accuracy of all Fosberg-type indices increases as the indices increase, exceeds 90% for values >13 (high fire risk; Table 4), and differs by less than 2% among forecast lead times. Based on these findings, the mFFWI is the most reliable of all predicted Fosberg-type indices.
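To make the dependence of the Fosberg-type indices on wind speed, equilibrium moisture content, and (for the mFFWI) fuel availability explicit, a minimal sketch is given below. The FFWI is written in its commonly quoted form (Fosberg 1978); the KBDI-derived fuel-availability factor of Goodrick (2002) is deliberately passed in as a parameter rather than computed, since its exact functional form is not reproduced here.

```python
import math

def fosberg_ffwi(wind_mph, emc_percent):
    """Fosberg fire weather index from wind speed (mph) and equilibrium
    moisture content (%), in the commonly quoted form (Fosberg 1978)."""
    x = emc_percent / 30.0
    eta = 1.0 - 2.0 * x + 1.5 * x**2 - 0.5 * x**3   # moisture damping term
    return max(eta, 0.0) * math.sqrt(1.0 + wind_mph**2) / 0.3002

def modified_ffwi(wind_mph, emc_percent, fuel_availability):
    """mFFWI: FFWI scaled by a KBDI-derived fuel-availability factor
    (cf. Goodrick 2002). The factor is supplied by the caller in this sketch."""
    return fuel_availability * fosberg_ffwi(wind_mph, emc_percent)

# Hypothetical values: 10-mph wind, 8% equilibrium moisture content.
print(round(fosberg_ffwi(10.0, 8.0), 1))
```

Because the wind term enters through sqrt(1 + U^2) and the moisture term through the damping polynomial, the positive wind-speed bias and the errors in the humidity-driven equilibrium moisture content both push the Fosberg-type indices toward the slightly positive bias reported above.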

c. NFDRS indices

WRF captures the temporal SC evolution acceptably well except on 13, 16, 18, and 28 June (Fig. 8). In the last third of June, SC shows a slightly positive bias. While the SC peaks predicted for 13 June are delayed by a day, the SC reduction is not predicted for 16, 18, and 28 June. WRF underestimates the SC on 12 and 13 June. The incorrect SC prediction for 13 June directly relates to WRF’s prediction of an increase in wind speed instead of the observed decrease (Fig. 6). On 16 and 28 June, the predicted and observed relative humidities differ strongly (Fig. 5); this yields an incorrect assessment of the fuel moisture conditions that explains the incorrect SC prediction (Fig. 8). For most days, SC based on ensemble averages outperforms SC based on individual forecast lead times. The 24-h forecast lead time provides the least reliable temporal evolution of SC; that is, the SC results shown in Fig. 8 are the worst. In most cases, the predicted SC is slightly higher than the observed SC, but these predictions are better for high than low SCs (Fig. 9).

Random errors caused by the initial conditions are typically about three times greater than systematic errors (Table 5). The SC RMSEs slightly increase with increasing wind speed RMSEs (not shown). The absolute differences between the predicted and observed SCs decrease with decreasing absolute differences between the predicted and observed wind speeds (Figs. 6 and 8); that is, improvements in wind forecasts provide great potential for improving SC predictions.

WRF captures the general trend of ERC acceptably well. It strongly underestimates ERC between 10 and 14 June but overestimates ERC from 24 June onward (Fig. 8). The ERC peaks observed for 14 and 23 June are predicted correctly with a 1-day delay. For 14 June, a failure to predict precipitation (Fig. 2) plays a role. For 23 June, a general delay in predicting the synoptic development is the cause. The daily average relative humidity, for instance, increases from 23 to 24 June, while WRF predicts a decrease.

Generally, for most sites and days, the predicted and observed ERCs are highly correlated and the RMSEs are low. The 24-h forecast lead time tends to provide higher ERC values than all other forecast lead times. For most days, the ERC based on the 24-h forecast lead time correlates better with the observed ERC and has a lower RMSE than the ERC calculated from the ensemble or from any other forecast lead time (Table 5). Errors in ERC increase with increasing errors in relative humidity (Figs. 5 and 8). Underestimation of ERC correlates with overestimation of the relative humidity, and vice versa. The discrepancy between the predicted and observed ERCs is appreciably greater for high fire risk (ERC > 15) than for moderate or low fire risk (Fig. 9).

WRF predicts four of the seven IC peaks with a 1-day delay (Fig. 8). The 24-h forecast lead provides noticeably higher ICs and better correlation with the observed ICs on a day-by-day basis and overall than do all other forecast lead times. The IC predictions are excellent for the first third of June, strongly underestimated for the second third of June, and overestimated for the last third of June. This behavior pattern explains the scatter in Fig. 9. RMSEs are relatively high, but predicted and observed ICs correlate acceptably on a day-by-day and site-by-site basis (Table 5). Despite errors in wind and relative humidity, the predicted ICs are valuable for general trend assessment.

Predictions of BI are excellent for the first 12 days (Fig. 8). Overestimated precipitation and a predicted decrease in wind speed, while an increase was observed (Figs. 2 and 6), explain why the BI peak observed on 13 June was not predicted (Fig. 8). The failure to predict the BI decrease on 16 June results from errors in the precipitation forecasts for 14 and 15 June. From 18 June onward, BI is overestimated by a nearly constant amount; consequently, the temporal evolution is still captured well. For BI values <20 (low fire risk) or >60 (very high fire risk), the discrepancies between the predicted and observed BIs are smaller than for BI values in between (Fig. 9).

The BI SDE exceeding the BI bias (Table 5) indicates that random errors from the initial and boundary conditions of the wind speed and relative humidity are the main reasons for these discrepancies. The fact that the 24-h forecast lead time provides the best results most of the time suggests that the lateral boundary conditions are a main contributor. The BI RMSEs increase with increasing RMSEs of the relative humidity and wind speed. Although the RMSEs are, on average, very high, the predicted and observed BIs correlate strongly (Table 5). Thus, the predicted BIs are usable for general trend assessment.

d. Fires

In June 2005, 82 fires ignited in the area covered by the model domain (Fig. 1). Most of them ignited in coniferous forest. Since high fire risk does not mean that a fire will occur, actual fire ignitions can only be used for indirect evaluation. When actual fire ignitions are considered, possible differences among the fire-weather conditions at the fire site, at the nearest available observation site, and at the corresponding model grid point introduce further difficulties into the evaluation. Typically, no weather observations exist at a fire site until the fire becomes relatively large. The size of interior Alaska and its sparse population exacerbate the difficulty of evaluation.

Currently, only 29 sites are available for fire risk assessment in the region. Whether WRF-derived fire indices add value to the regional fire risk assessment is evaluated as follows: the predicted fire indices at the ignition sites and times are compared with the 29-site average of the respective fire index at ignition time. If the WRF-predicted fire index exceeds the 29-site average, it is counted as additional valuable information, as sketched below.
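The added-value test just described amounts to a simple per-ignition comparison; a sketch with hypothetical array names follows.

```python
import numpy as np

def added_value_fraction(index_at_fire_sites, index_29site_mean):
    """Fraction of ignitions for which the WRF-predicted fire index at the
    ignition grid point and time exceeds the same-day average of that index
    over the 29 observational sites. Inputs are equally long arrays, one
    entry per ignition; names and values are illustrative assumptions."""
    at_sites = np.asarray(index_at_fire_sites, dtype=float)
    network_mean = np.asarray(index_29site_mean, dtype=float)
    adds_value = at_sites > network_mean
    return adds_value.mean()

# Hypothetical example with five ignitions:
print(added_value_fraction([42.0, 15.0, 61.0, 30.0, 55.0],
                           [28.0, 22.0, 40.0, 33.0, 41.0]))  # -> 0.6
```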

At the ignition sites, most predicted Fosberg-type indices indicate low fire risk, while the 29 sites suggest moderate risk (>3), except for two cases with high risk (Fig. 10). Based on the evaluation strategy outlined above, Fosberg-type indices add no value to fire risk assessment in interior Alaska.

Fig. 10.

Fire indices predicted at the 82 locations within the model domain where fires ignited in interior AK in June 2005 for the day of ignition vs the average fire index over the 29 observational sites on the respective day for the (a) mFFWI, (b) SC, (c) ERC, (d) IC, and (e) BI. Dots and diamonds indicate lightning- and human-caused fires, respectively. The one-to-one line is superimposed. Other Fosberg-type indices show behavior similar to the mFFWI and are therefore not shown. Note that high fire indices do not mean that a fire will ignite; they only indicate fire risk/fire behavior. (f) Number of fires ignited per day within the area covered by the model domain. (Observational data on fire activity are available online at http://www.dnr.state.ak.us/forestry/firestats/.)

For most fires, the predicted SC, IC, BI, and ERC exceed the 29-site averages (Fig. 10); that is, the WRF-based NFDRS indices add information in this region of sparse observations. Investigation shows that some fire sites with lower NFDRS indices than the 29-site average fall in a grid cell with a WRF land cover type that is not considered by the NFDRS or that is classified as “water.” In some cases, WRF assumes a different land-cover type in the grid cell of the fire than the fuel that ignited (e.g., deciduous forest, while the fire started in black spruce).

Comparison of the temporal evolution of the fire indices with the number of fires ignited per day (Figs. 8 and 10) shows that the WRF-derived fire indices are high at the times when many fires ignited. Toward the end of June, the days without new fire ignitions are those for which a reduction in fire risk is predicted by some (e.g., 25 June) or all (e.g., 19 June) NFDRS indices. Days with a plateau in some of the fire indices correspond well with those with a plateau in the number of fire ignitions.

6. Conclusions

A suite of 30 daily initialized 5-day WRF simulations is used to assess WRF's suitability for fire-weather prediction in interior Alaska for June 2005. Simulated state variables and fluxes, as well as fire indices determined from the simulated data, are evaluated by comparing them with observations and with fire indices derived from the observed meteorological data. WRF predicts fire-weather conditions well for the various forecast leads and on ensemble average. The errors are within the range of observational uncertainty and depend only slightly on the forecast lead time for most quantities and indices. The 24-h forecast lead time, for instance, is best for predicting the time when the BI peak will occur.
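How the per-lead statistics and the "ensemble" average across all leads valid on the same day could be assembled is sketched below; the dictionary layout and function name are assumptions of this illustration, not the data structure used for the study.

```python
from statistics import mean

def lead_and_ensemble_means(forecasts):
    """Group daily forecast values of a quantity or fire index by lead time,
    and also average across all leads valid on the same day (the "ensemble"
    in the sense used here). `forecasts` maps (valid_day, lead_h) -> value."""
    by_lead, by_day = {}, {}
    for (day, lead), value in forecasts.items():
        by_lead.setdefault(lead, []).append(value)
        by_day.setdefault(day, []).append(value)
    lead_means = {lead: mean(v) for lead, v in by_lead.items()}
    ensemble = {day: mean(v) for day, v in by_day.items()}
    return lead_means, ensemble
```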

WRF predicts shortwave downward radiation well during wet conditions and acceptably during dry episodes. Errors in predicting shortwave downward radiation partly result from the initial and boundary conditions (zero cloud and precipitation mixing ratios). Discrepancies between the predicted and actual insolation cause errors in temperature and relative humidity through the partitioning of the available net radiation into sensible, latent, and ground heat fluxes. Consequently, WRF underestimates the diurnal amplitude of the relative humidity and temperature on average. However, it does capture the long-term temporal evolution of the daily average temperature and relative humidity well. Thus, fire indices based on daily averages will be more reliable than those based on extremes until the prediction of the extremes is improved, and WRF-derived fire indices based on daily averages should be preferred in decision making. Inclusion of a fuel model in determining Fosberg-type fire indices improves fire danger forecasts. Nevertheless, indirect evaluation against observed fires suggests that the predicted Fosberg-type fire indices provide no additional value for decision making in this case study.

Incorrectly forecast meteorological quantities lead to error propagation in any fire index calculated from them. Errors in predicted precipitation cause uncertainty in fire indices both directly (e.g., KBDI) and indirectly (e.g., through indices depending on relative humidity), with the uncertainty depending on the forecast lead time and the magnitude of the discrepancy. Nevertheless, WRF data permit us to capture the temporal evolution of KBDI, fuel moisture, and temperature in interior Alaska. The various fire indices differ in their sensitivities to meteorological and fuel conditions; consequently, failure to capture peaks occurs at different times for the various indices. The peaks of the predicted fire indices are occasionally delayed by 1 day. The SC is most reliable with respect to trends and to predicting the times of peaks and minima. Despite the error propagation that occurs in any accumulated quantity, the temporally continuous and spatially highly resolved suite of predicted fire indices may be more helpful than an occasional, single local value determined from observations at a site that may not even be representative of its adjacent area. However, any decision making should be based on an evaluation of a combination of all indices.

Indirect evaluation by means of 82 fires ignited in June 2005 within the area covered by the model domain shows that predicted NFDRS indices may add value to fire risk assessment in a region of sparse data. The skill scores of NFDRS indices indicate different levels of reliability for the various fire risk categories because of the different sensitivities of the fire indices to errors in WRF-predicted meteorological quantities (error propagation). Comparison of the number of fires ignited per day with the temporal evolution of the fire indices indicates that WRF-derived fire indices may have very good skill in “capturing” the observed variability of the fire activity.

WRF predicts the spatial distribution of precipitation acceptably for all forecast lead times. It captures the temporal precipitation evolution well, but underestimates precipitation. WRF successfully predicts precipitation of ≥7.5 mm day−1, which in interior Alaska is considered to be a critical threshold at or above which the fire danger will be reduced for several days. After eliminating “correct forecasts” due to random chance, high fire risk is better forecasted than is its reduction by precipitation.
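The phrase "after eliminating correct forecasts due to random chance" refers to an equitable categorical score such as the Heidke skill score (e.g., Wilks 1995); a sketch for the dichotomous ≥7.5 mm day−1 precipitation event is given below, with the contingency-table counts as assumed, hypothetical values.

```python
def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score for a 2x2 contingency table of the dichotomous
    event "daily precipitation >= 7.5 mm": 1 is a perfect forecast, 0 means
    no skill beyond random chance (cf. Wilks 1995)."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    numerator = 2.0 * (a * d - b * c)
    denominator = (a + c) * (c + d) + (a + b) * (b + d)
    return numerator / denominator if denominator else 0.0

# Hypothetical counts pooled over all sites and days:
print(round(heidke_skill_score(hits=40, false_alarms=15,
                               misses=25, correct_negatives=420), 2))
```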

This case study shows that WRF is able 1) to predict fire weather and fire danger reductions and 2) to provide the spatial and temporal distributions needed to calculate fire indices for interior Alaska for June 2005. Since for many meteorological quantities and fire indices the random errors from the boundary conditions contribute appreciably to the total error, the model domain should be as large as possible while still permitting acceptable turnaround times. In any case, the domain should be larger than the region of interest.

Acknowledgments

I thank G. Kramm and the anonymous reviewers for fruitful discussions, M. Shulski for access to the observations, C. O’Connor for editing, ARSC for computational support, and NSF for support under OPP-0327664 and ARC0652838.

REFERENCES

Alaska Climate Center, cited 2007: The Alaska Climate Research Center. [Available online at http://climate.gi.alaska.edu/.]
Anthes, R. A., 1983: Regional models of the atmosphere in middle latitudes. Mon. Wea. Rev., 111, 1306–1335.
Anthes, R. A., Y. H. Kuo, E. Y. Hsie, S. Low-Nam, and T. W. Bettge, 1989: Estimation of skill and uncertainty in regional numerical models. Quart. J. Roy. Meteor. Soc., 115, 763–806.
Berdeklis, P., and R. List, 2001: The ice crystal–graupel collision charging mechanism of thunderstorm electrification. J. Atmos. Sci., 58, 2751–2770.
Boles, S. H., and D. L. Verbyla, 2000: Comparison of three AVHRR-based fire detection algorithms for interior Alaska. Remote Sens. Environ., 72, 1–16.
Burgan, R. E., 1988: Revisions to the 1978 National Fire-Danger Rating System. Southeast Forest Experiment Station Research Paper SE-273, USDA Forest Service, Macon, GA, 39 pp.
Carlson, J. D., and R. E. Burgan, 2003: Review of users' needs in operational fire-danger estimation: The Oklahoma example. Int. J. Remote Sens., 24, 1601–1620.
Cheng, W. Y. Y., and W. J. Steenburgh, 2005: Evaluation of surface sensible weather forecasts by the WRF and the Eta Models over the western United States. Wea. Forecasting, 20, 812–821.
Cohen, J. D., and J. E. Deeming, 1985: The National Fire-Danger Rating System: Basic equations. General Tech. Rep. PSW-82, Pacific Southwest Forest and Range Experiment Station, Berkeley, CA, 17 pp.
Crook, N. A., 1996: Sensitivity of moist convection forced by boundary layer processes to low-level thermodynamic fields. Mon. Wea. Rev., 124, 1767–1785.
Davis, C., B. Brown, and R. Bullock, 2006: Object-based verification of precipitation forecasts. Part I: Methodology and application to mesoscale rain areas. Mon. Wea. Rev., 134, 1772–1784.
Deeming, J. E., R. E. Burgan, and J. D. Cohen, 1977: The National Fire-Danger Rating System—1978. Intermountain Forest and Range Experiment Station General Tech. Rep. INT-39, USDA Forest Service, Ogden, UT, 63 pp.
Done, J., C. A. Davis, and M. Weisman, 2004: The next generation of NWP: Explicit forecasts of convection using the Weather Research and Forecasting (WRF) model. Atmos. Sci. Lett., 5, 110–117.
Dudhia, J., 1989: Numerical study of convection observed during the Winter Monsoon Experiment using a mesoscale two-dimensional model. J. Atmos. Sci., 46, 3077–3107.
Fehr, T., N. Dotzek, and H. Hoeller, 2003: Comparison of lightning activity and radar-retrieved microphysical properties in EULINOX storms. Atmos. Res., 76, 167–189.
Fosberg, M. A., 1978: Weather in wildland fire management: The fire weather index. Proc. Conf. on Sierra Nevada Meteorology, South Lake Tahoe, CA, Amer. Meteor. Soc., 1–4.
Goodrick, S. L., 2002: Modification of the Fosberg fire weather index to include drought. Int. J. Wildland Fire, 11, 205–211.
Grell, G. A., S. E. Peckham, R. Schmitz, S. A. McKeen, G. Frost, W. C. Skamarock, and B. Eder, 2005: Fully coupled "online" chemistry within the WRF model. Atmos. Environ., 39, 6957–6975.
Hanna, S. R., 1994: Mesoscale meteorological model evaluation techniques with emphasis on needs of air quality models. Mesoscale Modeling of the Atmosphere, Meteor. Monogr., No. 47, Amer. Meteor. Soc., 47–58.
Hess, J. C., C. A. Scott, G. L. Hufford, and M. D. Fleming, 2001: El Niño and its impact on fire weather conditions in Alaska. J. Wildland Fire, 10, 1–13.
Hoadley, J. L., K. Westrick, S. A. Ferguson, S. L. Goodrick, L. Bradshaw, and P. Werth, 2004: The effect of model resolution in predicting meteorological parameters used in fire-danger rating. J. Appl. Meteor., 43, 1333–1347.
Hoadley, J. L., M. L. Rorig, L. Bradshaw, S. A. Ferguson, K. J. Westrick, S. L. Goodrick, and P. Werth, 2006: Evaluation of MM5 model resolution when applied to prediction of national fire-danger rating indexes. Int. J. Wildland Fire, 15, 147–154.
Houze, R. A., 1993: Cloud Dynamics. Academic Press, 573 pp.
Hufford, G. L., H. L. Kelley, W. Sparkman, and R. K. Moore, 1998: Use of real-time multisatellite and radar data to support forest fire management. Wea. Forecasting, 13, 592–605.
Kain, J. S., S. J. Weiss, J. J. Levit, M. E. Baldwin, and D. R. Bright, 2006: Examination of convection-allowing configurations of the WRF model for the prediction of severe convective weather: The SPC/NSSL Spring Program 2004. Wea. Forecasting, 21, 167–181.
Keetch, J. J., and G. M. Byram, 1968: A drought index for forest fire control. Research Paper SE-38, U.S. Dept. of Agriculture, Asheville, NC, 35 pp.
Klemp, J. B., and W. C. Skamarock, 2004: Model numerics for convective-storm simulation. Atmospheric Turbulence and Mesoscale Meteorology, E. Fedorovich, R. Rotunno, and B. Stevens, Eds., Cambridge University Press, 117–137.
Klemp, J. B., W. C. Skamarock, and J. Dudhia, 2007: Conservative split-explicit time integration methods for the compressible nonhydrostatic equations. Mon. Wea. Rev., 135, 2897–2913.
Knievel, J. C., D. A. Ahijevych, and K. W. Manning, 2004: Using temporal modes of rainfall to evaluate the performance of a numerical weather prediction model. Mon. Wea. Rev., 132, 2995–3009.
Knight, B. I., F. Ortiz, and R. McClure, 2005a: Alaska snow survey report—April 2005. Natural Resources Conservation Service, 34 pp.
Knight, B. I., F. Ortiz, and R. McClure, 2005b: Alaska snow survey report—May 2005. Natural Resources Conservation Service, 34 pp.
Kusaka, H., A. Crook, J. Dudhia, and K. Wada, 2005: Comparison of the WRF and MM5 models for simulation of heavy rainfall along the Baiu front. Sci. Online Lett. Atmos., 1, 177–180.
Lynch, J. A., J. S. Clark, N. H. Bigelow, M. E. Edwards, and B. P. Finney, 2003: Geographic and temporal variations in fire history in boreal ecosystems of Alaska. J. Geophys. Res., 108, 8152, doi:10.1029/2001JD000332.
McGuiney, E., M. Shulski, and G. Wendler, 2005: Alaska lightning climatology and application to wildfire science. Preprints, Conf. on Meteorological Applications of Lightning Data, San Diego, CA, Amer. Meteor. Soc., 2.14. [Available online at http://ams.confex.com/ams/pdfpapers/85059.pdf.]
Michalakes, J., S. Chen, J. Dudhia, L. Hart, J. Klemp, J. Middlecoff, and W. Skamarock, 2001: Development of a next generation regional weather research and forecast model. Developments in Teracomputing: Proceedings of the Ninth ECMWF Workshop on the Use of High Performance Computing in Meteorology, W. Zwieflhofer and N. Kreitz, Eds., World Scientific, 269–276.
Michalakes, J., J. Dudhia, D. Gill, T. Henderson, J. Klemp, W. Skamarock, and W. Wang, 2004: The Weather Research and Forecast Model: Software architecture and performance. Proc. 11th Workshop on the Use of High Performance Computing in Meteorology, Reading, United Kingdom, ECMWF, 13 pp. [Available online at http://www.wrf-model.org/wrfadmin/docs/ecmwf_2004.pdf.]
Mlawer, E. J., S. J. Taubman, P. D. Brown, M. J. Iacono, and S. A. Clough, 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102D, 16663–16682.
Mölders, N., and V. E. Romanovsky, 2006: Long-term evaluation of the Hydro-Thermodynamic Soil-Vegetation Scheme's frozen ground/permafrost component using observations at Barrow, Alaska. J. Geophys. Res., 111, D04105, doi:10.1029/2005JD005957.
Mölders, N., and G. Kramm, 2007: Influence of wildfire induced land-cover changes on clouds and precipitation in interior Alaska—A case study. Atmos. Res., 84, 142–168.
Mölders, N., M. Jankov, and G. Kramm, 2005: Application of Gaussian error propagation principles for theoretical assessment of model uncertainty in simulated soil processes caused by thermal and hydraulic parameters. J. Hydrometeor., 6, 1045–1062.
National Wildfire Coordinating Group, 2002: Gaining a basic understanding of the National Fire Danger Rating System—A self-study reading course. National Wildfire Coordinating Group, 73 pp. [Available online at http://www.nationalfiretraining.net/ca/nctc/prework/nfdrs_pre_study.pdf.]
Roads, J., F. Fujioka, S. Chen, and R. Burgan, 2005: Seasonal fire-danger forecasts for the USA. Int. J. Wildland Fire, 14, 1–18.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2005: A description of the Advanced Research WRF version 2. NCAR Tech. Note NCAR/TN-468+STR, 88 pp.
Smirnova, T. G., J. M. Brown, and S. G. Benjamin, 1997: Performance of different soil model configurations in simulating ground surface temperature and surface fluxes. Mon. Wea. Rev., 125, 1870–1884.
Smirnova, T. G., J. M. Brown, S. G. Benjamin, and D. Kim, 2000: Parameterization of cold season processes in the MAPS land-surface scheme. J. Geophys. Res., 105D, 4077–4086.
Speer, M. S., L. M. Leslie, J. R. Colquhoun, and E. Mitchell, 1996: The Sydney Australia wildfires of January 1994—Meteorological conditions and high resolution numerical modeling experiments. Int. J. Wildland Fire, 6, 145–154.
Stewart, J. B., 1977: Evaporation from the wet canopy of a pine forest. Water Resour. Res., 13, 915–921.
Stocks, B. J., M. A. Fosberg, T. J. Lynham, L. Means, B. M. Wotton, and Q. Yang, 1998: Climate change and forest fire potential in Russian and Canadian boreal forests. Climatic Change, 38, 1–13.
Stocks, B. J., M. A. Fosberg, M. B. Wotton, T. J. Lynham, and K. C. Ryan, 2000: Climate change and forest fire activity in North American boreal forests. Fire, Climate Change, and Carbon Cycling in North American Boreal Forest, E. S. Kasischke and B. J. Stocks, Eds., Springer, 368–376.
Thompson, G., R. M. Rasmussen, and K. Manning, 2004: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part I: Description and sensitivity analysis. Mon. Wea. Rev., 132, 519–542.
Vidal, A., F. Pinglo, H. Durand, C. Devaux-Ros, and A. Maillet, 1994: Evaluation of a temporal fire-risk index in Mediterranean forests from NOAA thermal IR. Remote Sens. Environ., 49, 296–303.
von Storch, H., and F. W. Zwiers, 1999: Statistical Analysis in Climate Research. Cambridge University Press, 484 pp.
Westerling, A. L., A. Gershunov, T. J. Brown, D. R. Cayan, and M. D. Dettinger, 2003: Climate and wildfire in the western United States. Bull. Amer. Meteor. Soc., 84, 595–604.
Wicker, L. J., and W. C. Skamarock, 2002: Time-splitting methods for elastic models using forward time schemes. Mon. Wea. Rev., 130, 2088–2097.
Wilks, D. S., 1995: Statistical Methods in Atmospheric Sciences. Academic Press, 467 pp.
World Meteorological Organization, 1974: Guide to Hydrometeorological Practices. 3rd ed. WMO Tech. Rep. 82, Geneva, Switzerland, 123 pp.
Zhong, S., and J. Fast, 2003: An evaluation of the MM5, RAMS, and meso-Eta models at subkilometer resolution using VTMX field campaign data in the Salt Lake valley. Mon. Wea. Rev., 131, 1301–1322.
Zhong, S., H. J. In, X. Bian, J. Charney, W. Heilman, and B. Potter, 2005: Evaluation of real-time high-resolution MM5 predictions over the Great Lakes region. Wea. Forecasting, 20, 63–81.

Footnotes

Corresponding author address: Nicole Mölders, Dept. of Atmospheric Sciences, Geophysical Institute and College of Natural Science and Mathematics, University of Alaska Fairbanks, P.O. Box 757320, 903 Koyukuk Dr., Fairbanks, AK 99775-7320. Email: molders@gi.alaska.edu