Abstract

The decay of tropical cyclones after landfall is a key factor in estimating the extent of the hazard overland. Yet our current understanding of this decay is challenged by the low frequency of past events. Consequently, one cannot rely solely upon the historical record when attempting to quantify robustly the inland penetration of tropical cyclones. Thus, a framework designed to complement the historical record of landfalling storms by means of numerical modeling is introduced. Historical meteorological situations that could potentially have led to a landfall on the coast of the Gulf of Mexico are targeted and, using a bogus vortex technique in conjunction with a mesoscale model, a large number of landfalling hurricanes are simulated.

The numerical ensemble constitutes a more comprehensive sample of possible landfalling hurricanes: it encompasses the range of events observed in the past but is not constrained to it. This allows us to revisit existing statistical models of the decay of tropical cyclones after landfall. A range of statistical models trained on the numerical ensemble of storms are evaluated on their ability to reproduce the inland decay of historical storms. These models have more skill at predicting tropical cyclone intensity over land than similar models trained exclusively on historical data.

1. Introduction

Understanding the decay of tropical cyclones (TCs) after landfall is of critical importance in estimating the penetration of wind hazards inland. Hurricane practitioners model the filling rate of TCs for real-time forecasting, emergency preparedness, or estimating hazard return periods to design building codes and refine insurance premiums.

The primary energy source of the thermally driven circulation of tropical cyclones is the sensible and latent heat flux over the warm tropical ocean. Once the storm has made landfall, these fluxes are greatly reduced. Over land, friction also favors dissipation, although it has been shown to be a secondary effect in the storm decay (Miller 1963; Tuleya 1994).

Although the rationale behind tropical cyclone decay after landfall is well understood, the dynamical processes involved are multiple, nonlinear, highly variable in time and space, and interdependent. It is therefore difficult to propose a satisfactory theoretical and analytical quantitative description of the evolution of the pressure of a storm after it makes landfall. For this reason, previous studies have relied on empirical models. Successive refinements of this class of models took advantage of the increasing amount of reliable data to include more predictors and better understand the dynamical processes involved.

After the early work of Malkin (1959), Schwerdt et al. (1979) introduced the first breakthrough in modeling the decay of landfalling tropical cyclones by showing that the pressure decay depends on the region where the landfall occurs. Batts et al. (1980) proposed a model where pressure decays with time after landfall, the decay rate being constant for each storm and a function of the angle with respect to the coastline at which the storm makes landfall. Georgiou (1985) investigated a formulation of the central pressure that changes with distance to the coast depending on the region of landfall. Ho et al. (1987) showed that the decay rate is a function of the intensity at landfall and Vickery and Twisdale (1995) formalized this intensity dependence by objectively fitting regressions of the filling rate to the central pressure. Kaplan and DeMaria (1995) proposed a model for the decrease of wind speeds rather than the central pressure increase. However, the architecture of their model is similar in the sense that it accounts for intensity at landfall and distance to the coast. They refined their model for storms making landfall in New England (Kaplan and DeMaria 2001) and a specific representation of landfall over narrow landmasses was introduced by DeMaria et al. (2006). The latest framework for an explicit formulation of the pressure decay after landfall was introduced by Vickery (2005) with a filling rate that accounts for the intensity, the size, and the translational speed of the storm at landfall.

The present paper also investigates empirical models for the pressure filling after a tropical cyclone makes landfall. We propose complementing existing studies by taking advantage of recent advances in numerical weather prediction (NWP) modeling to enhance the empirical decay models. NWP models can be used to forecast or hindcast historical storms. Tropical cyclone forecasting with a comprehensive dynamical model is challenging because of model limitations (resolution, physical parameterization, and initialization) that make it diverge from the actual realization of the storm. Using such a model to hindcast past storms is of limited interest to train an empirical model as it does not extend the observational record. However, NWP models can simulate storms that are physically consistent even though they did not occur in the physical world.

We develop a methodology that consists of embedding a bogus vortex in an existing meteorological situation exempt from a tropical cyclone although favorable to a landfall on the U.S. coastline of the Gulf of Mexico (this region being chosen to narrow the scope of the study). Using such modified initial conditions to initialize an NWP model, we generate a large ensemble of “new” landfalling tropical cyclones. Section 2 focuses on the description of this methodology and the comparison of the numerical ensemble of TCs against the historical record.

The artificial landfalling tropical cyclones are then used to train statistical models of the decay after landfall in section 3. It cannot be assumed that the ensemble of modeled storms is flawless, particularly as the goal of generating a large ensemble of artificial storms could not be achieved without using a relatively inexpensive model setup. Thus, we devote sections 2c and 3b to the validation of the ensemble. In addition, the statistical models trained with the numerical ensemble are systematically tested against the historical data to ensure that they can simulate historical events, and their performance is compared against existing empirical models. The increased size of the learning dataset allows us to investigate the role of additional predictors, hence giving insight into the dynamical processes involved.

2. Numerical ensemble of landfalling storms

a. Methodology

1) Existing bogusing techniques

To generate “new” tropical cyclones with a mesoscale model, we make use of a technique called bogusing, which consists of adding a synthetic vortex in an existing gridded reanalysis. This technique is often used to sharpen the representation of tropical cyclones in coarse reanalyses in order to, for example, improve the initialization of a forecast.

The main issues when merging a synthetic vortex with the background field are 1) removing any preexisting storm in the analysis and 2) ensuring that the resulting merged vortex field is balanced. Examples of bogusing include those of Kurihara et al. (1993) and Low-Nam and Davis (2001), in which a vortex is merged into the final analysis, and Goerss and Jeffries (1994) and Xiao et al. (2006), where synthetic observations from a bogus vortex are added in the data assimilation step.

2) Our approach

We aim to simulate new events and we thus target meteorological situations where no TC is present in the first-guess analysis. We can therefore ignore the issue of filtering and removing existing storms. In addition, unlike the aforementioned studies, we are not concerned with the forecasting skill of the simulation and thus give less importance to limiting the perturbation of the background flow.

The wind profile of the synthetic, axisymmetric bogus is that of a modified Rankine vortex, with a linear increase of tangential wind up to the radius of maximum winds (RMW) and a power-law decrease beyond the RMW out to 400 km from the center, beyond which the wind decreases sharply. The maximum wind is reached at an altitude of 1 km, with a Gaussian decay above and below this level.
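To make the construction concrete, the following Python sketch evaluates this tangential wind profile. The maximum wind, RMW, and power-law exponent follow the values quoted in section 2b; the vertical decay scale, the e-folding distance of the sharp decay beyond 400 km, and the outward-decreasing form (RMW/r)^0.75 of the power law are assumptions made here for illustration.

```python
import numpy as np

def bogus_tangential_wind(r_km, z_km, v_max=30.0, rmw_km=108.0,
                          decay_exp=0.75, r_cut_km=400.0,
                          z_max_km=1.0, sigma_z_km=5.0, cut_scale_km=50.0):
    """Tangential wind (m/s) of the axisymmetric bogus vortex (sketch).

    Modified Rankine profile: linear increase inside the RMW, power-law
    decrease out to ~400 km, sharp decay beyond, and a Gaussian decay in
    the vertical about the level of maximum wind (~1 km). v_max, rmw_km,
    and decay_exp follow section 2b; sigma_z_km and cut_scale_km are
    illustrative assumptions not specified in the text.
    """
    r = np.asarray(r_km, dtype=float)
    # Radial shape: Rankine core inside the RMW, power-law skirt outside
    # (written here as (RMW/r)**decay_exp so the wind decreases outward).
    radial = np.where(r <= rmw_km,
                      r / rmw_km,
                      (rmw_km / np.maximum(r, 1e-6)) ** decay_exp)
    # Assumed sharp exponential decay beyond 400 km
    radial = np.where(r > r_cut_km,
                      radial * np.exp(-(r - r_cut_km) / cut_scale_km),
                      radial)
    # Vertical shape: Gaussian about the level of maximum wind
    vertical = np.exp(-((z_km - z_max_km) / sigma_z_km) ** 2)
    return v_max * radial * vertical

# Wind at the RMW, at the level of maximum wind: ~30 m/s
print(bogus_tangential_wind(108.0, 1.0))
```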

The initial wind field is not, physically speaking, an accurate approximation to a developed tropical cyclone, especially in regard to its vertical structure. However, the introduction of a sufficiently strong baroclinic vortex leads to the rapid formation of a tropical cyclone with realistic radial and vertical structures in the NWP model.

The approximate hydrostatic and gradient wind balance of the initial vortex wind field is computed by the iterative procedure presented by Nolan et al. (2001). A first-guess pressure field is derived from the hydrostatic balance of the average surrounding vertical sounding. This pressure field is corrected by integrating the gradient wind balance equation inward from a large radius using a second-order Crank–Nicolson technique. The temperature field is then corrected by solving the hydrostatic equation at each point for the temperature, given the new pressure field. These two corrections are repeated several times until convergence is achieved, usually in just a few iterations.
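The sketch below illustrates this balancing loop under simplifying assumptions: an axisymmetric wind field on a height-radius grid, a fixed Coriolis parameter, and plain first-order inward integration in place of the Crank–Nicolson scheme. It is a schematic of the two alternating corrections, not the implementation of Nolan et al. (2001).

```python
import numpy as np

R_DRY, GRAV, F_COR = 287.0, 9.81, 7.0e-5  # gas constant, gravity, Coriolis (assumed latitude)

def balance_vortex(v, r, z, p_far, T_far, n_iter=5):
    """Iteratively balance pressure and temperature for a prescribed wind field.

    Simplified sketch: the first-guess pressure is the far-field profile
    (assumed hydrostatically consistent with T_far); pressure is corrected
    by integrating gradient-wind balance inward from the outer radius, and
    temperature is re-derived from hydrostatic balance, the two corrections
    being repeated for a few iterations.
    v: tangential wind (nz, nr); r (m, starting at a nonzero radius) and
    z (m): 1D coordinates; p_far (Pa), T_far (K): far-field profiles.
    """
    nr = v.shape[1]
    p = np.tile(np.asarray(p_far, dtype=float)[:, None], (1, nr))  # first guess
    T = np.tile(np.asarray(T_far, dtype=float)[:, None], (1, nr))
    for _ in range(n_iter):
        rho = p / (R_DRY * T)
        # Gradient-wind balance, dp/dr = rho*(v**2/r + f*v), integrated inward
        for j in range(nr - 2, -1, -1):
            dr = r[j + 1] - r[j]
            dpdr = rho[:, j + 1] * (v[:, j + 1] ** 2 / r[j + 1] + F_COR * v[:, j + 1])
            p[:, j] = p[:, j + 1] - dpdr * dr
        # Hydrostatic balance solved for T: dp/dz = -p*g/(R*T)  =>  T = -g*p/(R*dp/dz)
        dpdz = np.gradient(p, z, axis=0)
        T = -GRAV * p / (R_DRY * dpdz)
    return p, T
```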

In addition, the specific humidity in the inner core of the vortex is increased by up to 20% at the location of the maximum winds to enhance the convection that drives the transformation of the initial vortex into a realistic tropical cyclone.

The zonal and meridional components of the synthetic wind field are then added to the existing flow in the background analysis. Note that the synthetic vortex remains axisymmetric up to that stage. The only factor of asymmetry results from the linear combination of the purely azimuthal flow in the synthetic vortex and the background steering flow. For temperature, specific humidity, geopotential, and total mass of dry air in the column, only the deviations from the far field contained in the synthetic vortex are added to the background field.

3) Implementation

The Advanced Research version of the mesoscale Weather Research and Forecasting model (WRF ARW, version 2.2; Skamarock et al. 2005) is used to simulate the evolution of the vortices generated with the aforementioned procedure. The forecast skill of the WRF model in representing hurricanes has been investigated by Davis et al. (2008), and Zhu (2007) discusses its ability to reproduce boundary layer processes at landfall. Nolan et al. (2009a,b) found that the boundary layer and surface wind fields in high-resolution simulations of Hurricane Isabel (2003) compared remarkably well to a variety of in situ observations of the boundary layer in the same storm. However, given our goal of simulating several hundred storms, we chose a relatively inexpensive model setup. We therefore cannot rely solely on their results and validate ours against observations in sections 2c and 3b.

The geometric configuration includes two meshes of 36- and 12-km resolution, respectively. The mother domain is centered on the geographic point (28°N, 88°W) with 65 × 65 points and a Lambert projection, thus covering approximately the geographic box delimited by the latitudinal bands 17° and 37°N and longitudes 78° and 98°W. The inner mesh has 49 × 49 points and moves with the vortex by tracking the minimum geopotential height at 900 hPa every 15 min within a search radius of about 90 km (hence setting the upper limit of the translational speed of the vortex to 30 m s−1). The vertical grid includes 28 terrain-following hydrostatic-pressure vertical levels up to 50 hPa with about 9 levels below 800 hPa and 23 below 150 hPa.

Turbulence in the planetary boundary layer follows the Yonsei University scheme. Convection is computed using the Kain–Fritsch formulation. The microphysics follows the WRF single-moment three-class scheme. Surface friction velocities and exchange coefficients are based on Monin–Obukhov theory with standard similarity functions from lookup tables. The land surface model computes the temperature of five layers but does not include moisture. More details on these parameterizations can be found in Skamarock et al. (2005).

b. Generating an ensemble of landfalling storms

1) Targeting simulations of potential landfall

The bogusing technique described in section 2a is used to merge vortices of various intensities in historical meteorological situations. To generate a large ensemble of landfalling storms, we also need to target the simulations (i.e., to select the initial date and location where the synthetic vortex is to be merged in the analysis). We are interested in landfalling storms and thus use backward trajectories from the U.S. coastline to derive candidate situations. The rationale behind the choice of trajectory modeling builds on the assumption that if the beta effect (Holland 1983) and other impacts of the storm on its environment were neglected, the paths of tropical cyclones would approximately follow the mean steering flow. This average path is inferred from simulations performed with the kinematic trajectory model Flextra (Stohl and Seibert 1998) driven by the Final Operational Global Analysis of the National Centers for Environmental Prediction (NCEP) Global Forecast System at 1° spatial and 6-h temporal resolution available from the Research Data Archive, under dataset number ds083.2 (information online at http://dss.ucar.edu). Operational tropical cyclone track forecasting is sometimes performed using similar trajectory models, such as the Beta and Advection Model (Marks 1992), which include a parameterization of the beta drift of the storm. We included this parameterization in the Flextra model. However, we found the spread between the forecast and actual paths of the bogus storms to be so large that we chose to use the kinematic trajectories only to derive approximate initial locations for the bogus vortices, so that the “artificial” storms are more likely to make landfall.

We limit the scope of this study to storms making landfall along the U.S. coastline of the Gulf of Mexico in the region near New Orleans. Hence 48-h-long backward-isobaric trajectories arriving at 700 hPa over New Orleans are computed for every day of the months of July–November of the years 2000–07. Trajectories originating from any landmass or outside of the geographical area covered by the numerical model domain are discarded from the following.

From the point reached by these trajectories 48 h before their arrival in New Orleans, we compute forward trajectories at the 850-hPa level. Situations of high shear are inferred by comparing the forward trajectories at 850 and 700 hPa, the latter being identical to the backward trajectory. When the locations of the air parcel at the two levels differ by more than 1000 km after 48 h, the situation is discarded. Selected trajectories are shown in Fig. 1. The significant divergence of the paths at 700 and 850 hPa—even for these optimum situations—illustrates why we used a narrow metric for the shear instead of the classical difference across the whole troposphere: local shear accumulated along the Lagrangian trajectory has a magnifying effect.
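A minimal sketch of this selection criterion is given below, assuming the trajectory end points are available as latitude-longitude pairs; the haversine distance and the function names are illustrative choices, not part of the Flextra toolchain.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between two points given in degrees."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = np.radians(lat2 - lat1)
    dlam = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def low_shear(end_700, end_850, max_sep_km=1000.0):
    """Lagrangian shear criterion of section 2b.

    end_700 and end_850 are the (lat, lon) positions reached after 48 h by
    the forward trajectories at 700 and 850 hPa. The situation is retained
    only if the two air parcels are separated by less than max_sep_km.
    """
    sep = great_circle_km(end_700[0], end_700[1], end_850[0], end_850[1])
    return sep <= max_sep_km

# Example: a separation of roughly 370 km would be retained
print(low_shear((30.0, -90.0), (28.0, -87.0)))
```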

Fig. 1.

Kinematic trajectories reaching New Orleans between 2000 and 2007 and selected as being favorable for a bogus experiment of landfall due to their origin over ocean and the limited divergence between the 850-hPa (gray) and 700-hPa (black) trajectories.


Out of 1124 trajectories, 168 satisfy both the shear and the geographic criteria. The second criterion is the most stringent: without taking into account the shear, we would have considered 238 situations. The average annual number of selected situations is 27; however, the standard deviation is 11.5, illustrating the high interannual variability of situations potentially leading to a TC landfall.

2) Modified initial conditions

For each situation selected, several mesoscale bogus storms are simulated using WRF forced by the NCEP/GFS global analysis mentioned above, altered by the addition of a bogus vortex at the desired location. The evolution of a tropical cyclone is driven by its initial state and its environment. In a numerical model, the choice of physical parameterizations also influences the modeled storm (Braun and Tao 2000). Our goal being to generate a comprehensive ensemble of landfalling TCs rather than to test the sensitivity of the results to the aforementioned factors, we decided to minimize the degrees of freedom of the experiment, relying only on a posteriori validation of the spread of the simulations to infer whether the parameter space explored was broad enough.

All initial synthetic vortices have an initial maximum wind at 10 m (Vmax) of 30 m s−1. The decay of the wind profile beyond the RMW follows a power-law decrease that scales with (r/RMW)^0.75, r being the radial distance from the center of the storm. The initial RMW is 108 km (i.e., 3 times the horizontal resolution of the mother domain). For each selected trajectory we perform three experiments with the bogus located within 1° of the location diagnosed with the trajectory model. The physics parameterizations are held constant for all simulations.

These relatively inexpensive and conservative parameterizations tend to simulate storms that are too weak (see section 2c for a quantitative assessment). Hence, we decided to include a perturbation of the sea surface temperature (SST). For each of the three initial vortex locations, two simulations are performed: one with the observed SST and a second with the SST increased by 2 K throughout the domain. Note that such an artificial offset is acceptable given the specific scope of this study, which aims only to simulate a set of landfalling storms that is comprehensive in terms of intensity.

3) Refinement of case selection

With a total of 168 potential situations of landfall in New Orleans and an ensemble of six members for each selected situation, a total of 1008 bogus storms could be simulated. However, we decided to further pre- and postprocess the selected cases to avoid unrealistic storms.

Unlike operational bogusing procedures, our approach does not remove any preexisting observed storms. Consequently, we may initialize the bogus in the vicinity of an existing storm leading to a merger of both vortices that would complicate subsequent interpretation of the results. Hence, initial situations with a low pressure system stronger than the bogus located within 400 km of the potential bogus center are discarded. In this study, we had to discard only three events out of 1008 for this reason.

The fact that we do not target situations where a historical storm occurred may also lead us to initialize bogus storms in a highly baroclinic environment. Such situations are avoided by using the cyclone phase space framework proposed by Hart (2003). The cyclone phase space’s thermal asymmetry parameter (B) is derived from the difference of the right- and left-hand-side (with respect to storm motion) geopotential thicknesses between 900 and 600 hPa within 500 km of the storm center. We used the mean steering flow to estimate the initial translational direction of the storm, needed to identify its left- and right-hand sides. If the absolute value of B is greater than 10 m, the environment in the immediate surroundings of the storm is considered too baroclinic (the gradient of geopotential thickness is too large) and the situation is discarded. This criterion led us to discard 9% (88 events) of the potential initial situations.
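The following sketch outlines the B computation under simplifying assumptions (regular latitude-longitude grid, flat-earth distances, geopotential height fields already interpolated to 600 and 900 hPa); it is a schematic of the Hart (2003) diagnostic rather than a reference implementation.

```python
import numpy as np

def thermal_asymmetry_B(lats, lons, z600, z900, center, heading_deg, radius_km=500.0):
    """Cyclone phase space thermal asymmetry parameter B (Hart 2003), sketched.

    z600, z900: geopotential height fields (m) on a lat/lon grid;
    center: (lat, lon) of the storm; heading_deg: direction of motion
    (degrees clockwise from north), here taken from the mean steering flow.
    B is the 900-600-hPa thickness averaged over the right-hand semicircle
    minus that over the left-hand semicircle, within radius_km of the center.
    """
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    # Approximate local Cartesian offsets (km) from the storm center
    dy = (lat2d - center[0]) * 111.0
    dx = (lon2d - center[1]) * 111.0 * np.cos(np.radians(center[0]))
    inside = np.hypot(dx, dy) <= radius_km
    # Sign of the cross product with the motion vector separates right from left
    hx, hy = np.sin(np.radians(heading_deg)), np.cos(np.radians(heading_deg))
    cross = hx * dy - hy * dx          # > 0 on the left of motion, < 0 on the right
    thickness = z600 - z900
    right = thickness[inside & (cross < 0)].mean()
    left = thickness[inside & (cross > 0)].mean()
    return right - left                # |B| > 10 m flags a too-baroclinic environment
```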

Once we have identified the favorable conditions, we proceed to the bogus simulation. The only filtering of failed simulations consists of discarding those where the tracking of the storm fails. The tracking of the TC is performed using WRF’s internal tracking described in section 2a. The main reason this tracking may fail is that the storm approaches a more intense system. This can be inferred by monitoring whether the storm center remains at the center of the grid (i.e., by checking that the grid is not drifting toward the more intense neighboring storm). We stop the simulation when the storm is located 10 grid points (or 120 km) away from the center of the inner mesh.

This postprocessing led to the selection of 903 valid tracks. The NWP model is integrated for 5 days and the first 12 h of each simulation are discarded. The average track length of the successful simulations is 70.8 h and only 25% of the tracks are shorter than 48 h.

c. Validation of the bogus storms

The experiment is not designed to reproduce historical storms. However, it is important that the storms in the bogus ensemble are representative of tropical cyclones occurring in the Gulf of Mexico. Achieving a good coverage of the whole range of tropical cyclone types in this area was a prerequisite for the parameter space explored in the ensemble to be deemed satisfactory.

The results of the WRF simulations are validated against the Atlantic basin Hurricane Database (HURDAT), which includes the intensity and track position for storms between 1851 and 2007 (Jarvinen et al. 1984). Only historical storms observed in the Gulf of Mexico (geographical box delimited by 20°–35°N, 100°–78°W) are included in the comparison. At this stage we include all bogus tracks, without distinction as to whether or not the storm makes landfall. A specific validation of the storm characteristics at landfall is conducted in section 3.

1) Tracks

Figure 2 shows the track density in the bogus ensemble (Fig. 2a) and in HURDAT (Fig. 2b). The coverage of the region of New Orleans is much higher in the bogus ensemble than in the historical record with a track density more than an order of magnitude higher. As discussed in section 2b, the experiment was designed so that the storms would have arrived in New Orleans in 48 h if they were advected by the mean flow like a passive tracer. However, their actual path diverges significantly from the mean flow. This behavior is illustrated here, where many bogus storms do not reach New Orleans; on average, the distance between the bogus location 2 days after the initialization and the target is 356 km (standard deviation 193 km).

Fig. 2.

Track density (counts by 1° × 1° grid boxes) in (a) the numerical ensemble and (b) HURDAT (1851–2007). The white lines represent the tracks of landfalling storms.


2) Intensity

The ranges of intensity of the storms in the numerical ensemble and the historical record are consistent. We chose to use central pressure as a measure of intensity rather than surface winds in order to avoid the scaling problems that arise when inferring wind at 10-m heights and 1-min averages from the NWP output in the particular context of a TC environment. The minimum and maximum sea level pressure in the HURDAT database for storms in the Gulf of Mexico are 882 and 1017 hPa, respectively. In the bogus ensemble the values range from 895.4 to 1017.9 hPa so that only two observations in the HURDAT record are not encompassed by the numerical ensemble. The distributions of storms by range of pressure are given in Table 1. All track points every 6 h are counted here rather than the minimum sea level pressures reached by individual storms. Note that although the experiment was not designed to match the distribution of the historical record, we achieved a good level of coverage for all categories.

Table 1.

Distribution of modeled and observed (HURDAT) storms by minimum central pressure range (%) in the Gulf of Mexico.


This range could not have been obtained without artificial offsets in the sea surface temperature as the most intense storm modeled with the reference SST has a central pressure of 921 hPa. The 2-K offset in SST mentioned in section 2b leads to an average decrease in sea level pressure of 12.3 hPa.

3) Intensification and decay

The rate of intensity change is also challenging to forecast with numerical models. Even if our purpose is not to reproduce the rapid intensification of specific historical storms, it is relevant to compare the intensification and the decay of bogus storms to the historical record to check if the modeled storms are realistic.

Figure 3 shows the distribution of the minimum and maximum intensity change (computed at a 24-h temporal resolution to match the National Weather Service definition of rapid deepening as a drop of at least 42 hPa in 24 h) for each storm. The numerical model overestimates the proportion of steady storms, with a significant peak in the distribution around zero. The distribution of observed intensity change is slightly narrower, with a standard deviation of 23.2 hPa compared to 26.3 hPa for the numerical ensemble. However, the decay rate ranges are comparable. Figure 3 also shows that the model captures rapid intensification events well, with 10.8% of the bogus tracks intensifying at a rate higher than 42 hPa day−1 compared to 8.6% in the observed record. However, the model misses the two most extreme events: Rita in 2005 (−70 hPa in 24 h) and Wilma in 2005 (−97 hPa in 24 h).
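For reference, statistics of this kind can be computed from the 6-hourly pressure series along each track with a calculation of the following form; the data layout assumed here (one pressure array per track) is illustrative.

```python
import numpy as np

def intensity_change_stats(tracks, window_steps=4, threshold=42.0):
    """Min/max 24-h central-pressure change along each 6-hourly track.

    tracks: list of 1D arrays of central pressure (hPa) every 6 h.
    window_steps=4 corresponds to a 24-h window; the rapid-deepening
    threshold of 42 hPa in 24 h follows the text. Returns the per-track
    extrema and the fraction of tracks deepening faster than the threshold.
    """
    mins, maxs = [], []
    for p in tracks:
        p = np.asarray(p, dtype=float)
        dp = p[window_steps:] - p[:-window_steps]   # 24-h pressure change
        if dp.size:
            mins.append(dp.min())                   # most rapid deepening (negative)
            maxs.append(dp.max())                   # most rapid filling (positive)
    mins, maxs = np.array(mins), np.array(maxs)
    frac_rapid = float(np.mean(mins <= -threshold))
    return mins, maxs, frac_rapid
```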

Fig. 3.

Distribution of the minimum (negative values for intensification) and maximum (positive values for the decay) intensity change (hPa) within 24 h along each modeled (black) and observed (white) track.


The behavior of the pressure in the hours leading up to landfall is of particular relevance to our study. Figure 4 shows the intensity change in the 6 h preceding landfall for the historical and modeled storms. The distributions cover very similar ranges, although the bogus storms underestimate the degree of intensification directly before landfall.

Fig. 4.

Distribution of the intensity changes (negative values for intensification) in the 6 h before landfall for the modeled (black) and observed (white) tracks.


4) Size

The relatively coarse resolution of the inner mesh in our model setup (12 km) makes it difficult to reproduce very small storms. The HURDAT dataset does not include information on the storm size; thus, we used the extended best track (EBT) dataset, version 1.8, which covers the period 1988–2007 (Demuth et al. 2006). In this dataset, RMWs obtained mostly from the vortex messages of aircraft reconnaissance missions (when available) are attached to HURDAT’s best track. These radii of maximum winds in the Gulf of Mexico range from 9.25 to 564.25 km.

In the bogus ensemble, the RMW—measured as the distance between the minimum sea level pressure and the maximum wind at 10 m—ranges from 12 to 476.4 km. Thirty snapshots have a radius of maximum winds lower than or equal to 24 km; however, all of them have a central pressure of 979 hPa or higher. These small RMWs are thus induced by the detection of a secondary pressure minimum located some distance from the center in relatively weak storms whose inner cores are not well defined. Computing the RMW as the distance between the location of Vmax and that secondary pressure minimum can induce a spuriously low value.

As expected, the range of small RMWs is not captured by the model. This is clearly a limitation of using a coarse numerical model to investigate tropical cyclones. Table 2 provides the distribution of RMWs in the bogus ensemble and in EBT. Despite the fact that all the storms are initialized from a bogus vortex with an RMW of 108 km, there is still a wide range of RMW sizes.

Table 2.

Distribution of modeled and observed (EBT) storms by RMW (%) in the Gulf of Mexico.


3. Statistical models of the decay of tropical cyclones after landfall

In this section, the numerical ensemble of tropical cyclones is used to train empirical models of storm decay after landfall. We illustrate how the increased size of the training dataset allows us to improve upon the existing statistical models for pressure filling as proposed by Vickery and Twisdale (1995) and Vickery (2005).

a. Landfall characteristics of historical storms

To estimate the filling rate of any particular tropical cyclone, it is extremely important to have a precise knowledge of the time of and intensity at landfall. We therefore use the central pressure and time of landfall provided in the landfall summaries of the National Hurricane Center (NHC) Tropical Cyclone Reports (information online at www.nhc.noaa.gov/pastall.shtml) in conjunction with the 6-hourly best-track time series of central pressure.

As highlighted by Vickery (2005), the sizes and translational velocities of the storms at landfall are likely to play a role in their filling rates. Translational speed (Vt) at landfall is linearly interpolated at the landfall location from the best-track position of the storm. The RMW is obtained from the EBT (Demuth et al. 2006) when not available in the NHC Tropical Cyclone Reports, H*Wind (Powell et al. 1998), or in Vickery (2005). A higher priority is given to the latter sources of data compared to EBT because they include the RMW at the time of landfall, avoiding an interpolation from the 6-hourly EBT time series. We consider only post-1988 events because of the scarce radius information before that date. Also, when the nearest available RMW data are not within 6 h of the landfall, the RMW is marked as not available. Florence (1988) and Chantal (1989) were ultimately discarded because of the lack of information with regard to their size at landfall.

In this work, filling models are fitted and validated using both hurricanes and tropical storms. It is however difficult to obtain robust estimates of filling rates for very weak storms due to their sensitivity to environmental pressure, and therefore Helene (2000) and Bertha (2002) (with central pressures of 1006 and 1008 hPa at landfall, respectively) were discarded. In the coastal segment covered by the bogus ensemble (94.86°–84.87°W; see below), a total of 24 selected storms made landfall in the 1988–2007 time period. Their characteristics are summarized in Table 3.

Table 3.

Landfall characteristics of selected named storms making landfall in the area covered by the numerical ensemble.


b. Landfall characteristics of modeled storms

Since a single location has been targeted in the setup of the bogus ensemble, the coverage of the coastline of the Gulf of Mexico is incomplete, as shown by the tracks of bogus and historical landfalls (Figs. 2a,b, respectively). The vast majority of the landfalling bogus storms are located along the northern part of the Gulf coast, so that, for example, the coverage of the Gulf coasts of Texas and Florida is not dense enough to be compared to the historical dataset. Consequently, we discard storms making landfall on the edges of the domain: west of 94.86°W and east of 84.87°W. These limits (approximately 5° west and east of New Orleans) were chosen to discard the outermost 5% of storms on both ends of the distribution. Once this longitudinal filtering is done, the southernmost bogus landfall occurs at 29.27°N, hence also narrowing the latitudinal variability of the original dataset.

To ensure that each storm has enough observations to obtain a robust estimate of the filling rate, only storms that remain over land for at least 4 h are considered. For consistency with the historical dataset, modeled storms with a central pressure at landfall greater than 1005 hPa are discarded. A comparison of the intensity of the selected storms in the numerical ensemble and in the historical record is provided in Fig. 5. This histogram shows that the landfall intensities of the bogus events cover the range observed in the historical data and offer a more continuous intensity distribution. A scatterplot of size versus intensity at the time of landfall for both modeled and observed storms is given in Fig. 6, which shows that while the numerical ensemble does not capture smaller storms, the relationship between size and intensity is consistent across both datasets.

Fig. 5.

Distribution of intensity at landfall (central pressure in hPa) of selected storms making landfall in the vicinity of New Orleans in the numerical ensemble (black) and historical dataset (white).


Fig. 6.

RMW (km) vs central pressure (hPa) for historical (squares) and bogus (stars) storms. Only landfall points—retained to train the empirical decay models—are shown.


This filtering by longitude, time over land, and intensity at landfall reduces the size of the ensemble compared to the validation discussed in section 2c. In the following we consider 450 of the 566 bogus storms making landfall in the Gulf of Mexico.

c. Filling rate

Let P(i, t) be the minimum central pressure (hPa) of storm i, t hours after landfall. Let Penv(i, t) be some measure of environmental pressure corresponding to storm i, at time t. Vickery and Twisdale (1995) and Vickery (2005) propose

 
P(i, t) = Penv(i, t) − [Penv(i, 0) − P(i, 0)] exp[−α(i) t],    (1)

where α(i) is the filling rate for storm i. We aim to build a statistical model relating α(i) to a vector of storm-specific predictors denoted x(i). It is noted that Vickery and Twisdale (1995) and Vickery (2005) take Penv(i, t) to be 1013 hPa, while in this work we attempt to account for some variability by using a month- and location-specific climatological mean of the environmental pressure from 1948 to 2006 obtained from NCEP–National Center for Atmospheric Research (NCAR) gridded reanalyses (Kalnay et al. 1996).

The filling rate α(i) is estimated separately for each storm i using nonlinear least squares and the resulting estimate is denoted α̂(i). The α̂(i) for the historical and modeled storms have means of 0.041 and 0.033 h−1, respectively, and standard deviations of 0.026 and 0.017 h−1, respectively (estimates are based on the first 48 h after landfall). Figure 7 provides a comparison of the distributions and shows the two sets of α̂(i) to be similar: the Kolmogorov–Smirnov test returns a p value of 0.38, so the hypothesis of equivalence of the distributions cannot be rejected. Figure 7 shows that six of the modeled storms intensify after landfall [α̂(i) < 0]. Further inspection shows that only one of them undergoes a drop in pressure of more than 2 hPa over land: this storm intensifies by 8 hPa as it travels over the Mississippi Delta south of New Orleans. It is reasonable that a storm with this particular path might intensify slightly over land because of the sustained sensible and latent heat fluxes. There are examples of such behavior in historical storms that make landfall in the Florida Peninsula while remaining close to the coast (e.g., Irene, 1999). A storm can also intensify over land as a result of extratropical transition and/or if it merges with another low pressure system, such as Ike (2008). These are however very specific cases: allowing negative α̂(i) in a filling model using the architecture specified in Eq. (1) is unrealistic, as it allows for the possibility of exponential intensification as a storm moves inland.
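As an illustration, the per-storm estimation of α̂(i) by nonlinear least squares under Eq. (1) can be sketched as follows; the use of scipy's curve_fit, the initial guess, and the handling of the environmental pressure series are assumptions made here, not a description of the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_filling_rate(t_hours, p_central, p_env):
    """Estimate the filling rate alpha-hat of Eq. (1) by nonlinear least squares.

    t_hours: hours after landfall; p_central: observed central pressure (hPa);
    p_env: environmental pressure (hPa), either a constant or a series of the
    same length (the month/location climatology of the text). Only the first
    48 h after landfall are used, as in section 3c. Returns alpha-hat (1/h).
    """
    t = np.asarray(t_hours, dtype=float)
    p = np.asarray(p_central, dtype=float)
    penv = np.broadcast_to(np.asarray(p_env, dtype=float), p.shape)
    keep = t <= 48.0
    t, p, penv = t[keep], p[keep], penv[keep]
    dp0 = penv[0] - p[0]                    # central pressure deficit at landfall

    def model(tt, alpha):
        return penv - dp0 * np.exp(-alpha * tt)

    alpha_hat, _ = curve_fit(model, t, p, p0=[0.04])
    return float(alpha_hat[0])

# Example: a storm filling from a 60-hPa deficit toward 1012 hPa
t = np.arange(0, 49, 6.0)
p_obs = 1012.0 - 60.0 * np.exp(-0.05 * t)
print(fit_filling_rate(t, p_obs, 1012.0))   # ~0.05
```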

Fig. 7.

Distribution of the filling rate for the modeled (black) and historical (white) storms. The dashed line highlights the separation between negative and positive filling rates.


d. Predictors

Building on existing knowledge of the processes driving the decay of tropical cyclones after landfall, we considered the following candidates for inclusion in the predictor vector x(i), as well as the interactions between them:

  • ΔP0, the difference between the minimum central pressure at landfall and the corresponding estimate of environmental pressure. The influence of the intensity at landfall on the filling rate was identified as early as Malkin (1959) and has been confirmed in more recent works.

  • RMW0, the radius of maximum wind at landfall. Vickery (2005) included this parameter in his model to account for the faster filling of small storms, whose inner cores move entirely over land sooner than those of larger storms, cutting off the heat fluxes in a shorter time.

  • Vt0, the translational speed at landfall. Following a similar argument to that for RMW0, Vickery (2005) also showed that fast-moving storms decay faster.

  • (ΔP0 × Vt0)/RMW0, the composite predictor proposed by Vickery (2005) to account for three parameters without decreasing the residual degrees of freedom of the model.

  • ΔP−6, the intensity change in the 6 h preceding landfall, as storms in the process of intensification are likely to behave differently at landfall. This predictor has not previously been explored in the empirical hurricane decay model literature.

  • Awater, the relative area of the storm over water, within 2 × RMW of the center. This predictor can account for the fact that if a large fraction of the storm remains over water, it may sustain its intensity as suggested by Kaplan and DeMaria (1995). The choice of 2 × RMW is made following DeMaria et al. (2006).

  • The difference between the proportions of the right- and left-hand sides of the storm (with respect to its motion) over water. This would account for the angle between the track and the coastline, as suggested by Batts et al. (1980), as it is thought that a storm making landfall with its right-hand side (where the strongest winds occur) over water would retain more energy.

  • Arough, Asmooth, the relative area of the storm over rough or smooth terrain, within 2 × RMW of the center. Roughness is characterized using land-use maps, with urban and forested areas (and, generally speaking, any area with a surface roughness higher than 0.20 m) being considered as rough. These parameters would account for the components of the decay attributed to friction and are novel relative to the works of Vickery, and Kaplan and DeMaria.

For the three latter predictors (related to the area of the storm over various types of surface), the average over the first 4 h after landfall is used as all storms remain overland for that period of time. To ensure compatibility, the land-use information for both the historical and modeled storms is based on that of the WRF model [24-category U.S. Geological Survey (USGS) land-use land-cover data interpolated onto the WRF grid].
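A sketch of how the three area-based predictors could be evaluated on such a land-use grid is given below; the grid layout, the water mask, and the function interface are illustrative assumptions (the averaging over the first 4 h after landfall would be applied on top of this).

```python
import numpy as np

def surface_fractions(dist_km, is_water, roughness_m, rmw_km, rough_thresh=0.20):
    """Fractions of the storm footprint over water, rough land, and smooth land.

    dist_km: distances (km) of grid cells from the storm center;
    is_water: boolean water mask; roughness_m: surface roughness length (m)
    from the land-use data; rmw_km: radius of maximum winds (km).
    The footprint is the disc of radius 2*RMW, following DeMaria et al. (2006),
    and land cells with roughness above 0.20 m are classed as rough, as in
    the text. Returns (A_water, A_rough, A_smooth), which sum to one.
    """
    dist = np.asarray(dist_km)
    water = np.asarray(is_water)
    rough_land = ~water & (np.asarray(roughness_m) > rough_thresh)
    footprint = dist <= 2.0 * rmw_km
    n = footprint.sum()
    a_water = (water & footprint).sum() / n
    a_rough = (rough_land & footprint).sum() / n
    return a_water, a_rough, 1.0 - a_water - a_rough
```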

A set of 899 possible combinations of the above predictors are tested for the statistical models to be presented below (architectures 3 and 4). All possible main effect combinations are considered with the following exceptions:

  • (ΔP0 × Vt0)/RMW0 cannot appear in a model where ΔP0, RMW0, or Vt0 is present, and

  • Awater, Arough, and Asmooth cannot all explicitly appear in a model due to the constraint that they sum up to one (i.e., one of them must be designated the baseline).

Only two-way interactions are allowed and a variable can only appear in one interaction. Interactions with Arough and Asmooth are omitted.

e. Architecture of the statistical models

We investigate the performance of four model architectures:

  1. a baseline model without predictors that uses the out-of-sample mean of the estimated filling rates α̂(j), j ≠ i, to predict P(i, t), the intensity after landfall of storm i;

  2. a Gaussian linear model based only on the composite predictor (ΔP0 × Vt0)/RMW0, to predict α(i) (provided for comparison with Vickery 2005);

  3. a Gaussian linear model based on the optimal combination of predictors from the above list; and

  4. a mixed logistic–gamma model based on the optimal combination of predictors from the above list.

Model 4 separates the probability of a storm not filling after landfall and the distribution of the filling rate when it is nonzero. Let the probability of a nonzero filling rate be denoted by p(i) = P[α(i) ≠ 0] and let α*(i) = α(i) when α(i) ≠ 0. To ensure that the probability p(i) is constrained to lie between zero and one, it is modeled via logistic regression such that

 
log{p(i)/[1 − p(i)]} = xp^T(i) bp,    (2)

where xp^T(i) is the predictor vector for p(i) and bp is the corresponding parameter vector. The filling rate α*(i) is assigned a gamma distribution with a constant dispersion parameter across all storms and a log link function to ensure a positive value for the expectation of α*(i). Thus,

 
log{E[α*(i)]} = xα*^T(i) bα*,    (3)

where xα*^T(i) is the predictor vector for α*(i) and bα* is the corresponding parameter vector. This results in an overall mean given by

 
E[α(i)] = p(i) E[α*(i)] = exp[xp^T(i) bp] / {1 + exp[xp^T(i) bp]} × exp[xα*^T(i) bα*].    (4)

For more details on generalized linear models, which the models above are examples of, the reader is referred to Dobson (2001).
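A minimal sketch of architecture 4 is given below, using statsmodels as an illustrative tool (the text does not state which software was used); predictors are assumed to be already normalized and assembled in a data frame, and negative filling rates are assumed to have been set to zero beforehand.

```python
import statsmodels.api as sm

def fit_mixed_logistic_gamma(X, alpha_hat):
    """Fit the mixed logistic-gamma filling-rate model (architecture 4), sketched.

    X: DataFrame of normalized predictors; alpha_hat: estimated filling rates.
    The logistic part models P(alpha != 0); the gamma part, with a log link,
    models the filling rate where it is nonzero. Returns both fitted models.
    """
    X = sm.add_constant(X)
    nonzero = alpha_hat > 0
    logistic = sm.GLM(nonzero.astype(float), X,
                      family=sm.families.Binomial()).fit()
    gamma = sm.GLM(alpha_hat[nonzero], X[nonzero],
                   family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    return logistic, gamma

def predict_mean_alpha(logistic, gamma, X_new):
    """Overall mean filling rate E[alpha] = p * E[alpha*], as in Eq. (4)."""
    X_new = sm.add_constant(X_new, has_constant="add")
    return logistic.predict(X_new) * gamma.predict(X_new)
```

The constant dispersion of the gamma component corresponds to the single scale parameter estimated by the GLM fit.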

The mixed logistic–gamma model architecture allows storms not to fill after landfall but does not allow for overland intensification. As mentioned above, the exponential decay formulation is unsuitable for capturing slight intensification. Therefore, when fitting the mixed logistic–gamma models to the numerical ensemble of storms, the six negative α̂(i) are set to 0. For the historical storms, for which there are no such cases, the mixed logistic–gamma model is replaced by a gamma-only model.

f. Model evaluation and fitting

Each statistical model (defined as a model architecture along with a combination of predictors) is fitted on both datasets and judged by its out-of-sample ability to predict historical storms. This is evaluated by the mean squared error (MSE) between the predicted and observed pressure time series after landfall, a cost function that ensures that strong storms have more influence on model selection. When the training sample is constituted of the historical storms, we use leave-one-out cross validation (i.e., fit using all but one of the historical storms and then measure the MSE of the predicted pressure time series for the storm left out of the sample—this process is repeated for each historical storm in turn). When we use the bogus storms as a training sample, we build only one model and evaluate it against all the historical storms. This procedure is repeated for all possible combinations of the predictors introduced in section 3d and model architectures 2–4.

In all cases that involve the computation of model coefficients (i.e., all model architectures except model 1), the model is fitted via maximum likelihood estimation with α̂(i) in lieu of α(i).
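The evaluation procedure can be summarized by the following sketch; fit_model and predict_alpha stand for whichever model architecture is being tested and are placeholders, as is the per-storm data layout.

```python
import numpy as np

def pressure_mse(alpha_pred, t_hours, p_obs, p_env):
    """MSE between observed and predicted post-landfall pressure series.

    The prediction follows the exponential decay of Eq. (1), with alpha_pred
    supplied by a statistical model; p_env is an array of environmental
    pressure (hPa) of the same length as p_obs.
    """
    dp0 = p_env[0] - p_obs[0]
    p_pred = p_env - dp0 * np.exp(-alpha_pred * np.asarray(t_hours, dtype=float))
    return float(np.mean((p_pred - np.asarray(p_obs, dtype=float)) ** 2))

def leave_one_out_rmse(storms, fit_model, predict_alpha):
    """Leave-one-out evaluation on the historical storms (section 3f), sketched.

    storms: list of dicts with keys 'x' (predictors), 't', 'p_obs', 'p_env';
    fit_model(training_storms) returns a fitted statistical model and
    predict_alpha(model, x) returns the predicted filling rate.
    """
    mses = []
    for i, s in enumerate(storms):
        model = fit_model(storms[:i] + storms[i + 1:])   # fit without storm i
        alpha = predict_alpha(model, s["x"])
        mses.append(pressure_mse(alpha, s["t"], s["p_obs"], s["p_env"]))
    return float(np.sqrt(np.mean(mses)))                 # sqrt of the average MSE
```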

g. Results

For each model we compute the square root of the average MSE obtained for each historical storm. The root-mean-squared error (RMSE) between the observed pressure time series and those corresponding to the best-fit α̂(i) is the minimum obtainable value, given the historical data described above and the exponential decay structure defined by Eq. (1). This benchmark RMSE is 3.19 hPa.

Table 4 gives the scores of the four model architectures when fitted using 1) the historical storms and 2) the bogus storms. For each model architecture, we retain only the optimum predictor combination (defined as the combination that gives the smallest RMSE). Hence, for a given model architecture, the optimum model can have different predictors depending on whether it was trained with the historical or the modeled storms.

Table 4.

RMSEs (hPa) between observed and predicted time series of central pressure after landfall for historical storms, whether the model is trained on historical (out of sample) or bogus storms.


Model 1—with no predictors—does not perform well in predicting the filling rate of the historical storms. It is worth noting that the score for model 1 is better when trained using the historical storms, while all the other models are better when trained using the numerical storms. This illustrates that the relationships between the filling rate and the predictors are well captured by the numerical ensemble.

All the models that include predictors perform better than the out-of-sample mean. Significant improvement is obtained using the composite predictor introduced by Vickery (2005) that combines the role of intensity, size, and translational velocity of the storm as was done in model 2.

However, it appears that the model can be further improved by using multiple predictors; this can be seen in the results for model 3. Note that we can increase the number of predictors without risk of overfitting because we rely on an out-of-sample procedure to test the model performances.

Finally, the architecture is refined by using a mixed logistic and gamma model to capture simultaneously storms not decaying after landfall and those with a nonzero filling rate. As noted in section 3e, when training this model on the historical set (in which all storms fill after landfall), the logistic part is ignored. Any change between models 3 and 4 is then essentially due to the change from a Gaussian to gamma distribution. We note that the RMSE does not provide a comprehensive comparison between both models because this measure is based only on the predicted mean rather than the distribution in full. A Gaussian and a gamma model can have very similar scores in terms of RMSE, and yet yield very different simulated intensities as the Gaussian model allows for a negative filling rate and hence exponential intensification.

Using a mixed logistic–gamma model (model 4) trained on the numerical storms gives the best results overall with an RMSE of 4.26 hPa, 34% greater than the benchmark RMSE (best fit). This result can be compared to the RMSE of model 2 with a single composite predictor fitted on historical storms (5.19 hPa) that is 63% greater than the benchmark. It is worth noting that the RMSEs for model 4 calculated using only the hurricanes in the set (i.e., omitting the tropical storms) are 5.57 and 4.82 hPa for the historical and bogus storms, respectively. These results provide confirmation that the reported improvements to the filling model are not driven by the weaker storms alone.

A comparison of the optimum statistical filling models trained on the historical and modeled storms is provided in Fig. 8, which shows the modeled MSE for each historical storm versus the MSE of the best fit. The filling model trained on bogus storms performs significantly worse than the model trained on historical storms only for one storm: Humberto (2007).

Fig. 8.

MSEs between the observed postlandfall pressure time series and those predicted by the mixed logistic–gamma model trained on the historical (squares) or bogus storms (stars) vs the MSE of the best exponential fit.


The optimal predictor variables and parameter estimates for the logistic model and the gamma model when trained on the numerical storms are given by the following expressions, respectively, where the bars indicate that all variables have been normalized:

 
[Eq. (5)]

and

 
[Eq. (6)]

The shape parameter for the gamma distribution is estimated to be 3.81.

Although the mixed logistic–gamma model removes the possibility of negative filling rates, which can arise with the Gaussian model, it still assigns a nonzero, albeit small, probability to α = 0 even for intense storms. Therefore, if used for simulation rather than prediction, the model above should be truncated at appropriate percentiles (e.g., 2.5% and 97.5%).
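As an illustration, simulation from the fitted model could proceed as in the sketch below, where the truncation is implemented by clipping draws of the gamma component to the chosen percentiles; the clipping choice and the function interface are assumptions made here.

```python
import numpy as np
from scipy import stats

def sample_filling_rate(p_nonzero, mean_alpha, shape=3.81, size=1,
                        q_lo=0.025, q_hi=0.975, rng=None):
    """Draw filling rates from the mixed logistic-gamma model for simulation.

    p_nonzero: probability that the storm fills (logistic component);
    mean_alpha: conditional mean filling rate (gamma component, log link);
    shape: gamma shape parameter (3.81 is the value reported in the text).
    Draws of the gamma component are clipped to its q_lo/q_hi quantiles,
    as suggested in section 3g; non-filling storms receive alpha = 0.
    """
    rng = np.random.default_rng(rng)
    fills = rng.random(size) < p_nonzero
    gamma = stats.gamma(a=shape, scale=mean_alpha / shape)  # mean = mean_alpha
    lo, hi = gamma.ppf([q_lo, q_hi])
    draws = np.clip(gamma.rvs(size=size, random_state=rng), lo, hi)
    return np.where(fills, draws, 0.0)
```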

h. Discussion

The qualitative effects of the predictors in the logistic and gamma components of the model act consistently with the existing knowledge of the main drivers of storm decay after landfall. This is illustrated in Fig. 9, which shows the predicted p(i) (Figs. 9a–c) and α(i) (Figs. 9d–h) for the 24 historical storms against their properties at landfall.

Fig. 9.

Estimates of (a)–(c) the probability of a nonzero filling rate, p, and (d)–(h) the mean filling rate, E(α), for the historical storms. These are based on the mixed logistic–gamma models trained on the historical (squares) or bogus storms (stars) [corresponding to Eqs. (5) and (6)], and the estimates are plotted against storm characteristics: central pressure, RMW, and translational speed at landfall; proportion of the storm over water (Awater); and the difference between the right- and left-hand-side proportions over water.


As all historical storms fill after landfall, there is no corresponding model for p(i) using the historical record. The model trained with the bogus storms allows for the probability of not filling after landfall, with this probability decreasing for intense storms (Fig. 9a) and those that retain only a small area over water after landfall (Fig. 9b). Understanding of the role of the translational velocity of the storm is made difficult by the interaction with Awater (which will decrease when a storm quickly moves inland). One would expect that high Vt0 favors the decay of a storm; however, as seen in Fig. 9c, a very fast-moving storm like Bonnie (2004) with Vt0 = 12.5 m s−1 has a greater probability of not decaying after landfall. However, it is worth noting that 1) higher translational velocities of tropical cyclones can be an indication of extratropical transition and 2) transitioning makes the intensity of the storm less sensitive to the landfall. The cyclone phase space (Hart 2003) analysis of Bonnie (2004) (information available online at moe.met.fsu.edu/cyclonephase) indicates that it was undergoing extratropical transition at the time of landfall in the Gulf of Mexico.

A slower filling rate is predicted for very weak storms (Fig. 9d). The sensitivity to the intensity at landfall is perhaps smaller than expected from previous works, probably because additional predictors are accounted for. The time needed for the storm to be isolated from the latent and sensible heat of the ocean clearly plays the most important role, as can be seen from the magnitude of the coefficient for RMW0 in the model, as well as from the sensitivity to the area of the storm remaining over water and to Vt0 in Figs. 9e–g. Surface roughness is also identified as a key factor (Fig. 9h).

Figure 10 presents the minimum central pressure time series after landfall for the 14 hurricanes in the historical dataset, together with the central pressure predicted by α̂(i) (the best fit) and by the filling rate predicted by the optimal mixed logistic–gamma model. The pressure ranges corresponding to the central 25% and 75% of the modeled distribution for each storm are also shown. The plots illustrate that while the statistical model does not explain all of the variance in the historical filling rates, all of the α̂(i) fall within the central 75% of the modeled distribution for α(i).

Fig. 10.

Observed (black) minimum central pressure (hPa) of historical storms, the best fit obtained using α̂(i) (blue dashed), and the prediction of the optimum mixed logistic–gamma model trained on the bogus storms (red). The ranges of pressures corresponding to the central 75% and 25% of the modeled distributions for each storm are depicted as the light and dark gray bands, respectively. The dotted green lines give the best fit using only the first 12 h of each time series.


Further inspection of Fig. 10 shows that the assumption of a constant exponential decay is not adequate for all storms (e.g., Andrew 1992). This is illustrated by the predicted time series of central pressure obtained when the filling rates are fitted on the first 12 h after landfall only. While this approach provides a better fit directly after landfall, it is detrimental to the reproduction of the full 48 h, as the RMSE of the best fit increases from 3.19 to 5.70 hPa. Thus, when using a constant decay rate, there can be a trade-off between model performance immediately after landfall and beyond.

4. Conclusions

The work presented here makes use of a numerical weather prediction model to enhance statistical models of the decay of tropical cyclones after landfall. This new statistical model could contribute to the improvement of intensity forecasting, emergency preparedness, and estimation of hazard return periods over land.

To generate a numerical ensemble of artificial—yet physically consistent—tropical cyclones, bogus vortices were embedded in gridded reanalyses for targeted favorable meteorological situations and these modified analyses were subsequently used to initialize a limited-area numerical model. The resulting ensemble of artificial storms was used to train statistical models of the intensity decay of tropical cyclones after landfall. To prevent introducing biases when using the artificial storms as the training dataset, statistical models were tested against the historical record. Models trained on the historical dataset were tested out of sample for a fair comparison and to ensure the resulting statistical model could predict future events.

This work builds on that of Vickery (2005), with the central pressure being modeled to reach the environmental pressure with an e-folding decay. We found the optimal model for the filling rate to be one that combines a logistic component—to account for the probability of a storm not filling after landfall—and a gamma component to estimate the filling rate when it is nonzero. Using the bogus ensemble, we fitted a model that predicts historical events better than models trained on the historical record alone.

The qualitative behavior of this model corroborates our current understanding of the main drivers of the decay of tropical cyclones after landfall. As pointed out by Vickery (2005), intensity, translational speed, and size all play important roles. In addition, the larger dataset provided by the numerical ensemble allowed us to explore further variables: the relative area of a storm over both water and rough terrain in the first few hours after landfall are also useful predictors of tropical cyclone decay in the region of New Orleans.

The fact that the empirical decay model trained on numerical storms successfully captured historical events provides an a posteriori validation of the numerical ensemble and demonstrates that, even with a relatively inexpensive model setup, the NWP model can capture the processes involved in hurricane decay after landfall. This validation is particularly important for two reasons. First, the quality and representativeness of the ensemble are sensitive to the range of the parameter space explored to generate it, and it is otherwise difficult to assess when that coverage becomes satisfactory. Second, it is difficult to assess the forecasting skill of the WRF model by investigating a limited number of specific storms. By demonstrating that the statistical model fitted using the numerical ensemble has greater predictive skill for historical storms, we have shown that the benefits of a larger, albeit artificial, training set outweigh the effects of the biases within the ensemble (seen in, e.g., the radius of maximum winds).

The scope of this study was limited to the U.S. coastline along the Gulf of Mexico in the region of New Orleans. However, Schwerdt et al. (1979) observed that the filling rate of tropical cyclones exhibits geographical variability, and the framework presented here can be extended to other regions to explore the role of other processes involved in the overland decay of tropical cyclones. It could, for example, be used to investigate the behavior of storms undergoing extratropical transition when making landfall in the northeastern United States.

Acknowledgments

We appreciate the assistance of Robert E. Hart, who provided helpful comments to improve this manuscript.

REFERENCES

Batts, M. E., M. R. Cordes, C. R. Russel, J. R. Shaver, and E. Simiu, 1980: Hurricane wind speeds in the United States. National Bureau of Standards Rep. BSS-124, 50 pp.
Braun, S. A., and W. K. Tao, 2000: Sensitivity of high-resolution simulations of Hurricane Bob (1991) to planetary boundary layer parameterizations. Mon. Wea. Rev., 128, 3941–3961.
Davis, C., and Coauthors, 2008: Prediction of landfalling hurricanes with the advanced hurricane WRF model. Mon. Wea. Rev., 136, 1990–2005.
DeMaria, M., J. A. Knaff, and J. Kaplan, 2006: On the decay of tropical cyclone winds crossing narrow landmasses. J. Appl. Meteor. Climatol., 45, 491–499.
Demuth, J., M. DeMaria, and J. A. Knaff, 2006: Improvement of Advanced Microwave Sounder Unit tropical cyclone intensity and size estimation algorithms. J. Appl. Meteor. Climatol., 45, 1573–1581.
Dobson, J., 2001: An Introduction to Generalized Linear Models. 2nd ed. Chapman and Hall/CRC, 225 pp.
Georgiou, P. N., 1985: Design windspeeds in tropical cyclone-prone regions. Ph.D. thesis, University of Western Ontario, London, ON, Canada, 295 pp.
Goerss, J. S., and R. A. Jeffries, 1994: Assimilation of synthetic tropical cyclone observations into the Navy Operational Global Atmospheric Prediction System. Wea. Forecasting, 9, 557–576.
Hart, R. E., 2003: A cyclone phase space derived from thermal wind and thermal asymmetry. Mon. Wea. Rev., 131, 585–616.
Ho, F. P., J. C. Su, K. L. Hanevich, R. J. Smith, and F. P. Richards, 1987: Hurricane climatology for the Atlantic and Gulf coasts of the United States. NOAA Tech. Rep. NWS38, Federal Emergency Management Agency, Washington, DC, 195 pp.
Holland, G. J., 1983: Tropical cyclone motion: Environmental interaction plus a beta effect. J. Atmos. Sci., 40, 328–342.
Jarvinen, B. R., C. J. Neumann, and M. A. S. Davis, 1984: A tropical cyclone data tape for the North Atlantic Basin, 1886–1983: Contents, limitations, and uses. NOAA Tech. Memo. NWS NHC 22, Coral Gables, FL, 21 pp.
Kalnay, E., and Coauthors, 1996: The NCEP/NCAR 40-Year Reanalysis Project. Bull. Amer. Meteor. Soc., 77, 437–471.
Kaplan, J., and M. DeMaria, 1995: A simple empirical model for predicting the decay of tropical cyclone winds after landfall. J. Appl. Meteor., 34, 2499–2512.
Kaplan, J., and M. DeMaria, 2001: On the decay of tropical cyclones after landfall in New England. J. Appl. Meteor., 40, 280–286.
Kurihara, Y., M. A. Bender, and R. J. Ross, 1993: An initialization scheme of hurricane models by vortex specification. Mon. Wea. Rev., 121, 2030–2045.
Low-Nam, S., and C. Davis, 2001: Development of a tropical cyclone bogusing scheme for the MM5 system. Preprints, 11th PSU–NCAR Mesoscale Model Users' Workshop, Boulder, CO, NCAR, 130–134.
Malkin, W., 1959: Filling and intensity changes in hurricanes on land. National Hurricane Research Project Rep. 34, 18 pp.
Marks, D. G., 1992: The beta and advection model for hurricane track forecasting. NOAA Tech. Memo. NWS NMC-70, 89 pp.
Miller, B. I., 1963: On the filling of tropical cyclones over land, with particular reference to Hurricane Donna of 1960. National Hurricane Research Project Rep. 66, U.S. Weather Bureau, 82 pp.
Nolan, D. S., M. T. Montgomery, and L. D. Grasso, 2001: The wavenumber-one instability and trochoidal motion of hurricane-like vortices. J. Atmos. Sci., 58, 3243–3270.
Nolan, D. S., J. A. Zhang, and D. P. Stern, 2009a: Evaluation of planetary boundary layer parameterizations in tropical cyclones by comparison of in situ observations and high-resolution simulations of Hurricane Isabel (2003). Part I: Initialization, maximum winds, and the outer-core boundary layer. Mon. Wea. Rev., 137, 3651–3674.
Nolan, D. S., J. A. Zhang, and D. P. Stern, 2009b: Evaluation of planetary boundary layer parameterizations in tropical cyclones by comparison of in situ observations and high-resolution simulations of Hurricane Isabel (2003). Part II: Inner-core boundary layer and eyewall structure. Mon. Wea. Rev., 137, 3675–3698.
Powell, M. D., S. H. Houston, L. R. Amat, and N. Morisseau-Leroy, 1998: The HRD real-time hurricane wind analysis system. J. Wind Eng. Indust. Aerodyn., 77–78, 53–64.
Schwerdt, R. W., F. P. Ho, and R. R. Watkins, 1979: Meteorological criteria for standard project hurricane and probable maximum hurricane windfields, Gulf and East Coasts of the United States. NOAA Tech. Rep. NWS 23, Silver Spring, MD, 317 pp.
Skamarock, W. C., J. B. Klemp, J. Dudhia, D. O. Gill, D. M. Barker, W. Wang, and J. G. Powers, 2005: A description of the Advanced Research WRF version 2. NCAR Tech. Note NCAR/TN-468+STR, 100 pp.
Stohl, A., and P. Seibert, 1998: Accuracy of trajectories as determined from the conservation of meteorological tracers. Quart. J. Roy. Meteor. Soc., 124, 1465–1484.
Tuleya, R. E., 1994: Tropical storm development and decay: Sensitivity to surface boundary conditions. Mon. Wea. Rev., 122, 291–304.
Vickery, P. J., 2005: Simple empirical models for estimating the increase in the central pressure of tropical cyclones after landfall along the coastline of the United States. J. Appl. Meteor., 44, 1807–1826.
Vickery, P. J., and L. A. Twisdale, 1995: Wind-field and filling models for hurricane wind-speed predictions. J. Struct. Eng., 121, 1700–1709.
Xiao, Q., Y. H. Kuo, Y. Zhang, and D. M. Barker, 2006: A tropical cyclone bogus data assimilation scheme in the MM5 3D-Var system and numerical experiments with Typhoon Rusa (2002) near landfall. J. Meteor. Soc. Japan, 84, 671–689.
Zhu, P., 2007: High resolution WRF simulation of landfall hurricane boundary layer winds and turbulent structures. Proc. Eighth WRF Users' Workshop, Boulder, CO, NCAR, P1.27. [Available online at http://www.mmm.ucar.edu/wrf/users/workshops/WS2007/abstracts/p1-27_Zhu.pdf.]

Footnotes

Corresponding author address: Nadja Leith, Risk Management Solutions Ltd., 30 Monument St., London EC3R 8NB, United Kingdom. Email: nadja.leith@rms.com