In Part I of this study, a new algorithm for retrieving the latent heat field in tropical cyclones from airborne Doppler radar was presented and fields from rapidly intensifying Hurricane Guillermo (1997) were shown. In Part II, the usefulness and relative accuracy of the retrievals are assessed by inserting the heating into realistic numerical simulations at 2-km resolution and comparing the generated wind structure to the radar analyses of Guillermo. Results show that using the latent heat retrievals as forcing produces very low intensity and structure errors (in terms of tangential wind speed errors and explained wind variance) and significantly improves simulations relative to a predictive run that is highly calibrated to the latent heat retrievals by using an ensemble Kalman filter procedure to estimate values of key model parameters.
Releasing all the heating/cooling in the latent heat retrieval results in a simulation with a large positive bias in Guillermo’s intensity that motivates the need to determine the saturation state in the hurricane inner-core retrieval through a procedure similar to that described in Part I of this study. The heating retrievals accomplish high-quality structure statistics by forcing asymmetries in the wind field with the generally correct amplitude, placement, and timing. In contrast, the latent heating fields generated in the predictive simulation contain a significant bias toward large values and are concentrated in bands (rather than discrete cells) stretched around the vortex. The Doppler radar–based latent heat retrievals presented in this series of papers should prove useful for convection initialization and data assimilation to reduce errors in numerical simulations of tropical cyclones.
To predict the track, intensity, and structure of tropical cyclones (TCs), the meteorological community relies heavily on numerical simulations. While great progress has been made in our ability to simulate these phenomena, many sources of uncertainty (e.g., initial conditions, numerical approximations of the governing equations, and physical parameterizations) remain that limit the quantitative and qualitative use of model data. The primary driving force of TC intensity and structure change is the release of latent heat in clouds where the moist entropy flux comes from the thermodynamic disequilibrium at the ocean–atmosphere interface (Charney and Eliassen 1964; Emanuel 1986). This physical process is parameterized in current numerical models (e.g., Reisner et al. 1998) with few studies rigorously comparing the simulated fields to observations. Rogers et al. (2007) performed a detailed comparison of vertical velocity and hydrometeor fields from two TC numerical simulations with aircraft Doppler radar and microphysical probe measurements. For the vertical velocity field, they found that the general distribution was similar but the magnitudes were weaker in the simulations. For the hydrometeor field, Rogers et al. (2007) found that the simulated radar reflectivities were much higher than observed and, when combined with vertical velocity, have a direct connection to latent heating.
Several studies have documented considerable sensitivity to numerical model microphysical schemes when simulating TC intensity and structure change. For example, McFarquhar et al. (2006) found that choice of microphysics parameterization (including alterations to the basic condensation scheme) led to variations in simulated storm intensity by nearly 10 hPa. Uncertainties in graupel characteristics were found to also produce large changes in storm intensity and are likely one of the culprits behind the consistent and significant positive bias of radar reflectivities when compared to observations (McFarquhar et al. 2006; Rogers et al. 2007). Furthermore, Pattnaik and Krishnamurti (2007) conducted several microphysical sensitivity experiments with a numerical model and found that omission of cooling processes associated with the melting of mixed-phase hydrometeors and the evaporation of rainwater can produce important changes to a storm’s intensity. Cloud microphysical heating is a complex, dynamic process that is highly coupled to various other physical parameterizations and numerical solution procedures, and it is not surprising that uncertainty in the schemes and their effects on storm intensity/structure exist.
One such physical parameterization that is coupled to microphysical heating, which can also produce large changes in the simulated intensity/structure of TCs when perturbed, is turbulence. Braun and Tao (2000) found that by changing the boundary layer scheme (the vertical mixing and surface fluxes of mass, momentum, and energy accomplished by turbulent eddies) in their simulations of Hurricane Bob (1991), minimum central pressures varied by up to 16 hPa and maximum winds by 15 m s−1. Using an axisymmetric numerical model, Bryan and Rotunno (2009) found that the maximum possible intensity of TCs is most sensitive to the intensity of turbulence in the radial direction (defined through the turbulent length scale). For larger values of the turbulent length scale (more intense turbulent activity), the radial gradients of angular momentum become weaker, which requires a concomitant response in the energy field to maintain thermal wind balance. This process results in increased surface pressures and reduced maximum winds (Bryan and Rotunno 2009).
Given the uncertainty in current model microphysical schemes, associated latent heating fields, and the coupling to the chosen turbulence parameterization, it is clear that work needs to be done to retrieve latent heating fields from observations and to examine their usefulness in a modeling framework. In Part I of this study (Guimond et al. 2011), a new algorithm for retrieving the latent heat field in TCs from airborne Doppler radar was presented and fields from rapidly intensifying Hurricane Guillermo (1997) were shown. Here, in Part II, the goal is to determine the value of the retrievals in a numerical modeling setting (e.g., convective initialization) as well as their relative accuracy in terms of reproducing observed wind fields. To this end, the latent heat retrievals are inserted into realistic numerical simulations at 2-km resolution and the wind fields generated from the heating are compared to the Doppler radar wind analyses of Guillermo. Because the latent heat in this simulation is known in advance, it is not coupled to the turbulence parameterization. The retrieved-heating run is then compared to simulations that employ a very well-calibrated predictive model with a microphysics parameterization, including estimates of the turbulent length scale obtained through an optimization procedure that uses the latent heat retrievals (Godinez et al. 2012). The latent heat in this simulation is not known in advance and is thus coupled to the turbulence parameterization. These comparisons highlight important parts of both the latent heat retrieval algorithm and predictive modeling system needing more work.
The paper is organized as follows. In section 2, background on the numerical model, extensive initialization procedures, and the setup for each simulation are described. Section 3 shows the results of the simulations and their comparisons to Doppler radar observational analyses. In addition, the latent heating fields from the predictive simulation are compared to the retrievals in this section. Section 4 provides conclusions from the work and offers recommendations for improving the prediction of TC intensity and structure change.
2. Numerical model and setup for simulations
a. Numerical model
An overview of Hurricane Guillermo (1997), including a detailed explanation of a rapid intensification episode sampled by the National Oceanic and Atmospheric Administration (NOAA) P-3 aircraft, can be found in Reasor et al. (2009), Sitkowski and Barnes (2009), and Guimond et al. (2011). Reasor et al. (2009) and Guimond et al. (2011) describe an extensive set (10 composite periods) of observational analyses that have been constructed of Guillermo using the NOAA P-3 tail Doppler radars. Details on the wind fields and how they were constructed can be found in these two papers, especially in the appendix of Reasor et al. (2009).
The high gradient (HIGRAD) atmospheric model developed at the Los Alamos National Laboratory (LANL) was used to simulate the rapidly intensifying period of Guillermo. The HIGRAD model solves the three-dimensional (3D), rotating, compressible Navier–Stokes equations written in conservative form and framed in generalized coordinates with additional equations for cloud processes and turbulence [see Reisner et al. (2005) and Reisner and Jeffery (2009) for the complete equation set]. In this study, we only consider warm-rain cloud microphysical processes, which are described in the appendix of Reisner and Jeffery (2009). The reasons for choosing only warm-rain processes are described in the next section. The discrete model employs a semi-implicit time-stepping procedure (Reisner et al. 2005) along with a finite-volume approach in space computed on an A grid (collocated grid points). The quadratic upstream interpolation for convective kinematics including estimated streaming terms (QUICKEST; Leonard and Drummond 1995) was used to approximate advective terms with these quantities potentially limited by a flux-corrected transport procedure (Zalesak 1979).
b. Grid and settings for simulations
The model domain is 1276 km × 1276 km with a 120 km × 120 km region in the center with a constant 2-km horizontal resolution [“radar domain,” matches the Guillermo airborne Doppler radar analysis from Part I (Guimond et al. 2011)] stretching to about 15-km horizontal resolution at the boundaries using a differentiable hyperbolic tangent function. The first model level is at 35 m above the ocean surface and the vertical domain stretches to 22 km at the model top for a total of 71 levels. A full (varying in latitude over the domain) Coriolis force was used with the domain center at a latitude of 22°N (center of Guillermo at the start of aircraft penetrations). A time step of 2 s was used for all simulations.
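The stretched-grid construction described above can be sketched as follows: a one-dimensional spacing profile that holds 2 km near the domain center and relaxes toward 15 km at the boundaries through a hyperbolic tangent. The transition radius (`r_flat`) and width are illustrative assumptions, not the actual HIGRAD grid settings.

```python
import numpy as np

def stretched_spacing(n, dx_inner=2.0, dx_outer=15.0, r_flat=60.0, width=20.0):
    """Grid spacing (km) that is constant near the domain center and relaxes
    to dx_outer through a hyperbolic tangent; r_flat and width are
    illustrative parameters, not the values used in HIGRAD."""
    dxs = []
    x = 0.0  # distance from the domain center (km)
    for _ in range(n):
        frac = 0.5 * (1.0 + np.tanh((x - r_flat) / width))
        dxs.append(dx_inner + (dx_outer - dx_inner) * frac)
        x += dxs[-1]
    return np.array(dxs)

# spacing stays near 2 km in the inner region and approaches 15 km outward
dxs = stretched_spacing(80)
```

Because the tanh is differentiable, the spacing (and hence the metric terms of the generalized coordinates) varies smoothly, avoiding reflections at abrupt resolution changes.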
Away from the ocean surface, the eddy viscosity/diffusivity is expressed as

κi = CiΔi²,   (1a)

where Δi represents the grid spacing in each of the three Cartesian directions (i = 1, 2, 3) and Ci is a time scale (tuning coefficient) equal to 10−3 s−1 in the horizontal directions and 10−2 s−1 in the vertical direction. In numerical models, eddy diffusivities characterize the strength of subgrid scale mixing, which is accomplished mainly by turbulent eddies for the 2-km resolution radar portion of the model domain used in this study.
Values of κ in the horizontal located in the radar domain are 4000 m2 s−1. In the vertical, κ varies between about 30 m2 s−1 near the ocean surface and about 3000 m2 s−1 at the model top. At the ocean surface, the eddy viscosity is expressed as

κ = B|Vh|,   (1b)

where |Vh| is the horizontal wind speed and B is a length scale (tuning coefficient) equal to 25 m. We chose to use the above-mentioned expressions for the eddy viscosity/diffusivity in order to simplify the turbulence model so that changes in the intensity/structure of the simulated storm could be more easily ascribed to the latent heat forcing.
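A minimal numerical sketch of these eddy viscosity/diffusivity values follows. The interior form κ = CΔ² is an assumption implied by the stated coefficients; it reproduces the 4000 m2 s−1 horizontal value quoted for the 2-km radar domain.

```python
# Interior eddy viscosity/diffusivity, assuming the form kappa = C * dx**2
# implied by the stated coefficients (an assumption; the exact expression
# appears in the original equation set).
def kappa_interior(dx, C):
    return C * dx * dx

# 2-km radar domain, horizontal: 1e-3 s^-1 * (2000 m)^2 = 4000 m^2 s^-1,
# matching the value quoted in the text.
kh = kappa_interior(2000.0, 1.0e-3)

# Ocean-surface eddy viscosity: length scale B (25 m) times the wind speed.
def kappa_surface(speed, B=25.0):
    return B * speed
```

For a 40 m s−1 eyewall wind, the surface form gives κ = 1000 m2 s−1, of the same order as the interior vertical values.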
c. Environmental and boundary conditions
The environmental potential temperature, density, and water vapor initial conditions are taken from 1.125° European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses closest in time to the start of the aircraft observations of Guillermo (~1855 UTC 2 August 1997). Newtonian relaxation zones on the sides and top of the model domain were used to nudge the fields back toward the ECMWF environmental conditions and control the reflection of oscillations into the domain interior. A high-resolution (0.25°, daily) sea surface temperature dataset that relies on passive microwave satellite data (Reynolds et al. 2007) was used to initialize the ocean.
d. Vortex initialization
The initial vortex is taken directly from the first Doppler radar composite of Guillermo. Each radar composite (10 total) was constructed using data collected by the aircraft over the approximately 20-min period it took to sample the eyewall of Guillermo. Note that regions void of reflectivity in the radar domain were filled in with a Laplacian filter in the variational wind synthesis (Reasor et al. 2009). The radar composites are considered valid at the center of these approximately 20-min periods, which is 1855 UTC 2 August 1997 for the first one used to initialize the vortex (Reasor et al. 2009). This radar-constructed vortex only covers the inner portion of the model domain and therefore an extended radar vortex was created according to the following procedure. First, the Doppler radar analyzed winds are interpolated to a cylindrical grid extending out to the edge of the model domain with 2-km radial and 5° azimuthal spacing. Next, the ECMWF analyses are used to compute the environmental, horizontal winds impacting Guillermo by averaging in annuli around the analysis-depicted storm out to a 500-km radius. The annulus averaging is weighted with a large weight given to outer radii and a small weight given to inner radii. This procedure effectively removes the symmetric part of the vortex from the ECMWF analyses, leaving a reasonable estimate of the environmental winds impacting the storm (Hanley et al. 2001). The outer radius of the cylindrical grid is then set to the computed environmental winds and a smooth, exponentially decaying function is used to merge the edge of the radar domain into the environmental winds at each radial. Finally, the merged winds are interpolated back to the model Cartesian grid.
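The final merging step at each radial can be illustrated with a simple exponential blend between the radar-domain edge wind and the environmental wind. The e-folding scale `efold` is a hypothetical parameter, since the paper does not state the decay rate of the merging function.

```python
import numpy as np

def blend_to_environment(v_edge, v_env, r, r_edge, efold=100.0):
    """Merge the wind at the edge of the radar domain (v_edge) into the
    environmental wind (v_env) with a smooth exponential decay along a
    radial. efold (km) is a placeholder e-folding scale."""
    w = np.exp(-np.maximum(r - r_edge, 0.0) / efold)
    return w * v_edge + (1.0 - w) * v_env
```

Inside the radar domain (r ≤ r_edge) the radar wind is returned unchanged; far outside, the result asymptotes to the environmental wind, as in the merged vortex of Figs. 1 and 2.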
Figure 1 reveals the 3D wind speed structure of the merged vortex on a subset of the full model grid (500 km on each side, but still 22 km in the vertical). Specifically, this figure shows several isosurfaces of wind speed with opacity scaling that allows a view of the inner core of the storm (from Doppler radar analyses) as well as the blending into the environment (from ECMWF analyses). The merged vortex is able to retain the detailed, asymmetric structure of the inner core of Guillermo (see the arrow pointing to the Doppler radar section of Fig. 1) while gradually relaxing the storm into the environment at the edges of the model domain. Figure 2 shows a horizontal cross section through the merged vortex at about 1-km altitude on the full model grid for reference.
The vortex is introduced into the model using a dynamic initialization procedure in which forcing terms are added to the horizontal momentum equations as shown:

∂(ρui)/∂t = Fi + Gρ(ũi − ui),   (2)

where Fi represents all the standard forcing on the right-hand side of the horizontal momentum equations, ũi represents the horizontal velocities from the merged vortex, ui the model velocities, ρ is the dry air density, and G is the chosen nudging coefficient of 10−3 s−1. The dynamic initialization (or “nudging”) procedure has been used successfully for over 30 yr (e.g., Hoke and Anthes 1976), including in studies of tropical cyclones (Chang 1981; Krishnamurti et al. 1998; Tory et al. 2006). The goal of the process is to develop a set of initial conditions, based on observations, that are balanced (i.e., steady state) with respect to the full model equations. Several other methods exist for vortex initialization, such as the “bogus vortex” method and 3D/4D variational approaches (e.g., Kurihara et al. 1990; Navon et al. 1992). We chose the nudging method for both its simplicity and effectiveness.
In this study, only the merged Guillermo vortex (Figs. 1 and 2) is nudged into the model as described above, allowing the other model variables (such as vertical velocity, potential temperature, density, and water vapor) to develop in a consistent and balanced manner with the horizontal momentum field. The nudging coefficient was chosen by trial and error, which is not without precedent (Krishnamurti et al. 1998), although more sophisticated methods have been used (Zou et al. 1992). Using too large of a coefficient produced large-amplitude gravity wave oscillations along with a perpetually unbalanced mass and momentum field. In contrast, using too small of a value did not provide enough forcing to spin up a quasi-balanced vortex in a reasonable amount of time. A value of 10−3 s−1 seemed to provide a good compromise between these two effects. Note that nudging different variables with a different observational dataset and especially with a different numerical model and setup will likely change the optimal coefficient (Hoke and Anthes 1976; Zou et al. 1992).
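A single explicit step of this nudging can be sketched as below, assuming the standard relaxation form G·ρ·(u_obs − u) added to the remaining forcing F; the function names are illustrative.

```python
def nudge_step(rho_u, F, rho, u_obs, dt=2.0, G=1.0e-3):
    """One forward-Euler step of the nudged momentum tendency: the model
    momentum rho*u is relaxed toward the merged-vortex wind u_obs with
    coefficient G (s^-1). dt matches the 2-s model time step."""
    u = rho_u / rho
    return rho_u + dt * (F + G * rho * (u_obs - u))
```

Iterating this update relaxes the model wind toward the merged-vortex wind on a time scale of 1/G ≈ 17 min, consistent with the few-hour spin-up toward a quasi-balanced state.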
The model is integrated using Eq. (2) for a period of 9–10 h, at which time the vortex reaches a steady-state minimum pressure of about 958 hPa, which matches observations of the storm at this time (Mayfield 2011). Note that during the initialization, the model microphysical scheme is enabled, but the forcing associated with heat released from phase changes is set to zero. This allows some consistency between the spunup vortex and the moisture field while not allowing for heat release that would change the wind field from that which was specified.
Figure 3 shows a time series of minimum pressure for the dynamic initialization of the merged Guillermo vortex. After a period of about 2 h, the minimum pressure begins to asymptotically approach the observed value of about 958 hPa. An offline, iterative thermal wind balance solver using the symmetric part of the merged vortex and an ECMWF environmental sounding as input revealed that the model is approaching thermal wind balance and that the balanced minimum pressure for this vortex is about 958 hPa. For the purposes of the simulations in this paper, the dynamic initialization is stopped at 9 h and the nudging coefficient is set to zero. At this time, nearly the exact structure of the merged vortex (Figs. 1 and 2) exists in the model along with the potential temperature and density fields that hold the vortex in approximate thermal wind balance. Note that the vortex maintains a nearly balanced, steady-state structure in the model after the nudging coefficient is set to zero (see Fig. 3, between 9 and 10 h), indicating that noise from the initialization is negligible.
e. Description of simulations
1) The “retrieval” run
After the dynamic initialization process is stopped at 9 h, three simulations are spawned. In the first simulation, the latent heat retrievals presented in Part I of this study (Guimond et al. 2011) are used as forcing in the thermodynamic equation
where F represents only the diffusive tendencies on potential temperature θ (the model microphysical scheme was allowed to operate, but the release of heat was shut off), which include sensible heat fluxes from the ocean surface, and FLH is the forcing from the latent heat retrievals (entering only the radar portion of the domain). The variable f(t) represents the time evolution of the latent heat forcing, which over the first 10 min takes the form
with τ = 600 − t and α = 300 s. The function in Eq. (4) acts to smoothly ramp up the first latent heat composite [see Fig. 12, pass 1 in Guimond et al. (2011)] in a 10-min period. Note that, in practice, it was not necessary for Eq. (4) to have start and end points at exactly 0 and 1, respectively. After this interval, the function in Eq. (4) is replaced by a linear interpolation operator that transitions each latent heat composite over a 35-min period extending out to 5.25 h. This simulation will be called the “retrieval” run.
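The overall time control of the retrieved heating can be sketched as follows. The initial 10-min ramp here is a simple linear stand-in for the smooth function of Eq. (4), and the 35-min piecewise-linear transition between composites follows the description above.

```python
def heating_forcing(t, composites, t0=600.0, dt_comp=2100.0):
    """Time control of the retrieved heating: a ramp over the first 10 min
    (a linear stand-in for the smooth function of Eq. (4)), then
    piecewise-linear interpolation between composites every 35 min."""
    if t <= t0:
        return (t / t0) * composites[0]
    s = (t - t0) / dt_comp
    i = min(int(s), len(composites) - 2)
    frac = s - i
    return (1.0 - frac) * composites[i] + frac * composites[i + 1]
```

Here `composites` would hold the retrieved heating composites on the model grid; scalars are used in this sketch for brevity.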
2) The “saturated” run
The second simulation is very similar to the first one except that every grid point in the Doppler radar analysis (corresponding to the center of the model domain) is assumed saturated when performing the latent heat retrieval, which releases heating/cooling wherever there is an updraft/downdraft, regardless of vertical velocity magnitude. The latent heat retrieval presented in Part I of this study and used in the retrieval run described above solves for the saturation state, which ultimately releases less heating/cooling in the radar portion of the model domain when compared with the saturation assumption. However, as Part I of this study demonstrates, assuming saturation over the entire inner core of TCs is not accurate because for |w| < ~5 m s−1, there is large variability in the saturation state of the air (Guimond et al. 2011). The present simulation is conducted in order to continue to assess the need for computing the saturation state over the inner core of Guillermo and to determine the accuracy of the method relative to the saturated case. This simulation will be called the “saturated” run.
3) The “predictive” run
In the third simulation, the latent heat retrievals are not used, but the heating produced through the model’s microphysical scheme is turned on instead, which can be represented through the boldface forcing variable shown in Eq. (3). Only warm-rain processes were considered in this simulation for two reasons. First and foremost, sensitivity tests with mixed-phase microphysics (Reisner et al. 1998) revealed small differences in wind speed magnitude and structure (not shown) relative to the warm-rain-only results. This is consistent with the dual-polarization radar study of deep convection by Tong et al. (1998) and the Hurricane Andrew (1992) modeling results of Zhang et al. (2002). Second, the latent heat forcing used in the retrieval run described above (including the algorithm in Part I) only incorporates warm-rain processes, and so warm-rain-only simulations were conducted here to enable fair comparisons.
The microphysics scheme relies partly on the water vapor field to release energy. Although a water vapor field consistent with Guillermo’s vortex was produced during the dynamic initialization process, this moisture field is only representative of the basic state (vortex) and not the perturbations (convection). Figure 4a shows a horizontal cross section of the water vapor mixing ratio at 5-km height after 9 h of vortex nudging, revealing a relatively featureless field with fairly low values throughout the domain. To assist the microphysics scheme with the placement and magnitude of water vapor that would result in the observed convection, the first latent heat composite [a condensation rate; see Fig. 12, pass 1 in Guimond et al. (2011)] is converted to a cloud water tendency using the expression
where Fqc is the cloud water mixing ratio tendency (kg kg−1 s−1), T is temperature, Cp is the specific heat of dry air at constant pressure, and Lc is the latent heat of condensation at 0°C. The converted cloud water was then assumed to have originated as a source of water vapor (from turbulent fluxes at the air–sea interface). This source of water vapor was added as a forcing term to the water vapor and mass continuity equations in the model over a 10-min period using the time evolution equation shown in Eq. (4).
Figure 4b shows the water vapor field at 5-km height after the dynamic initialization and the 10-min moisture forcing. The moisture field in Fig. 4b is clearly more realistic than that in Fig. 4a, showing an asymmetric distribution of water vapor that is consistent with observed convection at this time [see Fig. 1, pass 1 and Fig. 12, pass 1 in Guimond et al. (2011)]. After 10 min, the water vapor forcing is shut off and the model is allowed to run in a mode free from observational forcing with the microphysics scheme determining the release of heat. The large extent of the circulation initialized in the model could result in convection and latent heat release over large portions of the domain. To enable fair comparisons with the retrieval and saturated runs, the heating in the current simulation was only allowed over the radar portion of the model domain. This was accomplished by multiplying the microphysical heating, represented by the boldface term in Eq. (3), by an array that masks the outer portions of the domain.
Simulations that rely on the microphysics scheme for generating latent heat forcing are dependent not only on the moisture field but also turbulence parameters, including surface friction. All of these parameters are highly uncertain aspects of the modeling system that can have large impacts on the intensity and structure of a simulated storm (Emanuel 1995; Braun and Tao 2000; Bryan and Rotunno 2009). Godinez et al. (2012) used the latent heat retrievals presented in Part I of this study in a unique ensemble Kalman filter (EnKF) approach to estimate several key model parameters for HIGRAD simulations of Hurricane Guillermo (1997). Using a 120-member ensemble, Godinez et al. (2012) computed mean estimates of coefficients that determine the strength of (i) the ocean surface turbulent (subgrid) flux of water vapor, (ii) the ocean surface dissipation of momentum [see Eq. (1b) in this paper], (iii) the turbulent transfer of all variables throughout the model domain defined through the turbulent length scale, and (iv) the vertical wind shear affecting Guillermo. The coefficients estimated from the assimilation of the latent heat retrievals were found to produce the lowest overall error when used in a predictive simulation relative to the assimilation of the Doppler radar horizontal wind analysis (Godinez et al. 2012). For the third simulation in the present paper, we used the parameter estimates in (i)–(iii) along with a 1.5-order turbulent kinetic energy (TKE) turbulence closure [see Eq. (3) in Reisner and Jeffery (2009)] with the eddy viscosity/diffusivity in the horizontal expressed as
κ = CiΔi² + Ls√e,   (6)

where Ci is equal to 10−4 s−1 in the horizontal directions, e is the turbulent kinetic energy, and Ls is the turbulent length scale determined from Godinez et al. (2012). For this simulation, the eddy viscosity/diffusivity in the vertical only includes the TKE term in Eq. (6). At the ocean surface, the eddy viscosity for momentum is the same as that shown in Eq. (1b), only B is determined from the EnKF procedure. The wind shear coefficient was not used since a reasonable estimate was produced through the initialization procedure described above. We call this third simulation, which is a very well-calibrated modeling system, the “predictive” run.
A summary of the three simulations described above can be found in Table 1. Each simulation is run out to 6 h but only data up to 5.25 h are used for comparisons to the radar analyses. Data from each simulation are output every 5 min with comparisons to the analyses made every 35 min to approximate the time span between the centers of the radar composites (Reasor et al. 2009). In addition, at those comparison time intervals, the wind field output from each model simulation and the heating in the predictive simulation are averaged over a period of 20 min [see Table 1 of Reasor et al. (2009)] to approximate the time evolution inherent in the radar composites. Departures from this practice will be noted where appropriate.
Figure 5 provides a broad overview of the performance of each simulation described in section 2 in terms of minimum surface pressure along with best-track estimates from the National Hurricane Center (NHC), which are produced every 6 h (e.g., 1800, 0000 UTC), interpolated to the radar composite times. At the beginning of the time series, each simulation experienced an approximately 0.5-h period of slight weakening (or reduced intensification in the saturated run case), which is mainly due to the gradual introduction of heating into the model. The retrieval run performed quite well, capturing most of the observed deepening of Guillermo with an error of only about 5 hPa at the end of the simulation. However, on average the retrieval simulation was a bit too strong, possibly due to errors in the vertical velocity and the computation of saturation (Guimond et al. 2011). The saturated run is clearly much too strong, indicating that the inner core of Guillermo was not completely saturated. This is consistent with Fig. 2 in Guimond et al. (2011) and further highlights the need for an accurate estimation of the saturation state in the retrieval through a procedure similar to that shown in Part I of this study. The predictive run appears too weak by about 5–10 hPa early in the simulation. However, near the end of the run, the intensification of the vortex increases and the minimum pressure errors at 5.25 h are quite small. With only a few observations of Guillermo’s central pressure during this short period, caution must be taken when interpreting these results.
Comparisons are not made with the maximum wind speeds for two reasons: (i) the maximum wind speed is typically a volatile parameter that can reflect the chaos inherent in a single deterministic simulation, rendering comparisons uncertain; and (ii) the maximum wind speeds in the Guillermo Doppler analyses exhibited small variability [see Fig. 1b in Zou et al. (2010)] despite the observed rapid pressure falls during this period. It is possible that the Doppler analyses are not resolving the peak wind because of coarse resolution in space and time, smoothing, and surface clutter. It is also possible that the mass-momentum adjustment process is lagging during the observational sampling period. To avoid some of these uncertainties, we focus on mean and integrated measures of error.
Since the first Doppler radar wind composite was used to spin up Guillermo, comparisons of the model-generated wind fields to observations are made with the other nine composite periods [see Table 1 in Reasor et al. (2009)] every 35 min (with data averaged over a 20-min period as discussed above) out to 5.25 h. The storm moves during the simulation, so in postprocessing the model vortex is recentered in the domain using a wind centroid finder that minimizes the azimuthal variance of the wind speed. The data are then interpolated from the model grid to the Doppler analysis grid. Lastly, comparisons of the model fields to the radar analyses are only done where there is reflectivity in the radar observations.
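One simple realization of such a centroid finder is sketched below: each candidate grid point is scored by the azimuthal variance of the wind speed sampled on a ring around it, and the minimizer is taken as the center. The ring radius, search box, and nearest-point sampling are illustrative choices, not the exact method used in the paper.

```python
import numpy as np

def find_center(speed, x, y, radius=20.0, ntheta=72, search=30.0):
    """Pick the grid point that minimizes the azimuthal variance of wind
    speed on a ring of the given radius (km). speed[j, i] is the wind
    speed at (x[i], y[j]); x and y are uniformly spaced (km)."""
    th = np.linspace(0.0, 2.0 * np.pi, ntheta, endpoint=False)
    dx, dy = x[1] - x[0], y[1] - y[0]
    best, best_var = (0.0, 0.0), np.inf
    for yc in y[np.abs(y) <= search]:
        for xc in x[np.abs(x) <= search]:
            xs = xc + radius * np.cos(th)
            ys = yc + radius * np.sin(th)
            # nearest-grid-point sampling of the ring
            ii = np.clip(np.rint((xs - x[0]) / dx).astype(int), 0, len(x) - 1)
            jj = np.clip(np.rint((ys - y[0]) / dy).astype(int), 0, len(y) - 1)
            var = np.var(speed[jj, ii])
            if var < best_var:
                best, best_var = (xc, yc), var
    return best
```

In practice the search would start from a first-guess center (e.g., the minimum pressure location) rather than a fixed box centered on the domain.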
One simple, relative error metric chosen to compare the simulations is expressed as

E = 100% × (XM − XO)/XO,   (7)

where XM = XM(x, y, z, t) is the model tangential wind speed and XO = XO(x, y, z, t) is the Doppler radar–analyzed tangential wind speed. We focus on tangential wind speeds in most of this paper because they are more accurate and have a much larger signal when compared to the radial winds in the observational analysis (Reasor et al. 2009). Figure 6a shows a time series of the error defined in Eq. (7) averaged over a volume that is defined as the radar portion of the model domain in the horizontal (see section 2b) and 1–12 km in height (ocean surface contamination was present below 1 km and relatively few hydrometeors existed above 12 km). This relatively large averaging volume was used to convey the overall performance of the simulations. Figure 6a reveals that the retrieval run produces the smallest errors over the course of the 5–6-h simulation, although there is a slight low bias in the generated tangential winds. Both the saturated and predictive simulations are much too strong, with errors near 25% at the end of the period. Figure 6b is the same as Fig. 6a, except that the absolute value of the error in Eq. (7) is computed before the volume averaging to reveal the total errors of the simulations regardless of sign. This figure shows that after about 1 h, the errors in the retrieval run are fairly steady (between 15% and 20%), while the errors in the saturated and predictive runs increase with time to about 30% at the end of the simulation.
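The two variants of the metric (signed, and with the absolute value taken before volume averaging) can be sketched as follows, assuming a pointwise relative difference in the spirit of Eq. (7); `mask` stands for the reflectivity screening described above.

```python
import numpy as np

def relative_error(tan_model, tan_obs, mask):
    """Volume-mean relative tangential wind error (%), in the spirit of
    Eq. (7); mask selects points where radar reflectivity is present."""
    e = (tan_model[mask] - tan_obs[mask]) / tan_obs[mask]
    return 100.0 * e.mean()

def absolute_relative_error(tan_model, tan_obs, mask):
    """Same metric, but the absolute value is taken before averaging,
    so errors of opposite sign cannot cancel (as in Fig. 6b)."""
    e = np.abs(tan_model[mask] - tan_obs[mask]) / tan_obs[mask]
    return 100.0 * e.mean()
```

A +10% error at one point and a −10% error at another average to zero in the signed metric but to 10% in the absolute metric, which is why Fig. 6b reveals total error regardless of sign.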
Figure 6 is consistent with the results of Fig. 5 for the retrieval and saturated runs; that is, relative to observations, the retrieval run has the smallest errors over the entire simulation, while the saturated run has larger errors that grow quickly with time. Note that for the tangential wind speed errors, the predictive run performs very similarly to the saturated run, generating a storm that is too strong. However, for minimum surface pressure (Fig. 5), the predictive run is either too weak or very close to the NHC observations. This contrast in model performance arises because different error metrics will sometimes produce different results, especially when using diverse sources of observations.
Figure 7 presents a time series of the square of the linear sample correlation coefficient, defined as

R^2 = \frac{\left[\sum_{i=1}^{n}\left(V_i^M-\bar{V}^M\right)\left(V_i^O-\bar{V}^O\right)\right]^2}{\sum_{i=1}^{n}\left(V_i^M-\bar{V}^M\right)^2 \sum_{i=1}^{n}\left(V_i^O-\bar{V}^O\right)^2}

which measures how well the simulations capture the variability in the Doppler analyses. The calculation is performed for each observational composite over the same volume of the radar portion of the model domain (number of grid points n = 60 × 60 × 12 = 43 200) as described above for Fig. 6. Here, V_i^O represents the total wind speed of the observational analysis at each grid point in the volume, while \bar{V}^O is the volume-averaged total wind speed for the observations; the superscript M denotes the corresponding model quantities. The spunup, merged Guillermo vortex does not explain 100% of the variability in the Doppler analyses (see t = 0 h in Fig. 7) because of several factors inherent to models (e.g., resolution, numerics) and because of the imperfection of the nudging process. However, an R^2 of 80% is still an excellent value that captures all of the major asymmetries in the observed vortex. Figure 7 shows that the simulations that use a form of the latent heat retrievals for forcing (retrieval and saturated runs) have the largest R^2 values and are very similar in magnitude and structure. The R^2 values for the predictive run are consistently smaller than the other runs by about 20%–30%. These results demonstrate that using the latent heat retrievals directly for forcing can account for a significantly larger percentage of the variability in the observations relative to using the model microphysical scheme in a predictive mode, even when that scheme is highly calibrated to the retrievals.
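The squared linear sample correlation coefficient described above can be computed over the n = 43 200 grid points as in the following sketch (the function name and array arguments are illustrative assumptions, not the authors' code):

```python
import numpy as np

def explained_variance_r2(model, obs):
    """Square of the linear sample correlation coefficient between
    model and observed total wind speed over the analysis volume.

    model, obs : arrays of total wind speed on the common grid
                 (any shape; flattened before the calculation).
    """
    m = np.ravel(model)
    o = np.ravel(obs)
    dm = m - m.mean()   # deviations from the volume-averaged model wind
    do = o - o.mean()   # deviations from the volume-averaged observed wind
    return np.sum(dm * do) ** 2 / (np.sum(dm ** 2) * np.sum(do ** 2))
```

Because R² is invariant to linear rescaling, it isolates how well the simulation reproduces the spatial *pattern* of the observed wind field, complementing the magnitude-based error metrics of Fig. 6.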
To further demonstrate the performance of each simulation, horizontal cross sections of tangential wind speed at 1-km height are shown in Figs. 8 and 9 alongside the Doppler radar analyses of tangential wind speed. Figure 8 (composite centered at 2225 UTC 2 August 1997 corresponds to 3.5 h into the simulations) shows that the tangential wind speed is most accurate for the retrieval run (Fig. 8a), with the cross-section average of the absolute errors defined in Eq. (7) only 14%, compared to 43% for the saturated run and 24% for the predictive run. The largest errors for the retrieval run occur near the eye–eyewall interface, where small changes in the sharp gradients of the wind field can cause large deviations from the observations. The saturated run (Fig. 8b) is clearly much too strong everywhere, with errors greater than 50% across large portions of the storm. For the predictive run (Fig. 8c), the tangential winds at this time and level are generally too strong in the outer regions of the domain by about 10%–20%; more significantly, the winds in the inner portion of the domain are too weak by about 10% on the lower end and upward of 50% on the higher end.
The structure of the eye and eyewall (placement of features, variance) is also generally more accurate in the retrieval run, although the saturated run is quite similar with the exception of larger magnitudes everywhere, which results in a smaller "eye" (e.g., tangential wind speeds less than 20 m s−1). The predictive run attempts to capture the observed asymmetry in the northern quadrant of the eyewall, but instead it incorrectly places this feature on the eastern side with an erroneous elongated structure. In addition, the low tangential wind speed region of the eye in the predictive run is too spread out, which results in an eyewall displaced radially outward. This is the reason the tangential wind speeds are too strong in the outer regions of the domain and too weak in the inner regions.
Figure 9 shows the same plots as in Fig. 8, but at 0004 UTC 3 August 1997 for the radar observational analysis, which corresponds to 5.25 h into the simulations. The retrieval run (Fig. 9a) is again the closest to observations in terms of tangential wind speed magnitude, with the cross-section average of the absolute relative errors only 13%, compared to 38% in the saturated run and 24% in the predictive run. The retrieval run does have a dipole in errors centered in the eye of the storm, which is largely due to small offsets in the circulation centers as well as the orientation of tangential wind asymmetries in the eyewall. The saturated run (Fig. 9b) continues to be overly strong, which is due to too much net heat released in the latent heat retrieval. The magnitudes of the tangential winds in the predictive run (Fig. 9c) are again too strong in the outer regions and too weak in the inner regions of the domain. The low tangential wind speed eye region continues to be too large and spread out, causing a significant outward radial displacement of the eyewall. A small error dipole region is evident in the center of Fig. 9c as well.
Figure 10 shows time-averaged (over the last nine composite periods) vertical cross sections (at y = 0) of tangential wind speed from each model simulation and the radar observational analysis. The observations in Fig. 10d reveal an asymmetric structure, with the eastern eyewall having stronger tangential wind speeds and a larger tilt with height than the western eyewall. This structure is reproduced fairly well by the retrieval run shown in Fig. 10a in terms of the magnitude and structure of the tangential wind speed, although the radius of maximum winds (RMW) is displaced about 5–10 km radially outward from the observations. The absolute errors averaged in time and over the cross section are again lowest for the retrieval run (42%), followed by the saturated run (51%) and the predictive run (55%). The structure of the errors for the retrieval run shows dipoles along the sharp wind gradients of the eyewall and positive/negative regions at upper levels (above about 7 km) of the storm outside a radius of about 20 km. The saturated run (Fig. 10b) produces an eyewall vertical structure (slope) that agrees reasonably well with the observations, but the wind speeds are again too strong everywhere, shown by the contoured error values. Interestingly, the RMW is reproduced very well in the saturated run, but the extra heat released due to the saturation assumption in the latent heat retrieval has expanded the wind field inside the RMW too far, which creates a very narrow eye (tangential wind speeds less than 20 m s−1) region.
The predictive run (Fig. 10c) is able to reproduce the observed eyewall asymmetry. However, the large eye has displaced the time-averaged eyewall radially outward, which results in positive tangential wind speed errors (~20%) outside of approximately 40-km radius and negative errors (~20%–30%) inside this radius with the exception of upper levels. The predictive run produced tangential wind speeds that are generally too strong above approximately 7-km height. This structure explains why the volume-averaged tangential wind speed errors for the predictive run (Fig. 6) have a positive bias on par with the saturated run. What is driving the predictive run to produce a positive bias in the tangential wind speed errors, and why do the retrieval and saturated runs explain more of the variance in the radar observational analysis? To begin to answer these questions, the latent heating fields from the observations and predictive simulation were analyzed.
Figure 11 shows histograms of the latent cooling and heating rates from the latent heat retrievals used in the retrieval run (Figs. 11a and 11b) and the microphysics scheme used in the predictive simulation (Figs. 11c and 11d). The data from the predictive simulation are taken from the radar domain (120 km in the horizontal and 1–10 km in height). As with the wind speed comparisons, the latent heat histograms show data from the last nine composite times (covering about 5.25 h of the predictive simulation), since the first composite was used for Guillermo's initial conditions. However, unlike the wind speed comparisons, the heating data from the predictive simulation are taken from snapshots corresponding to the centers of the Doppler radar composites. In this case, showing snapshots (rather than averages) from the model is more revealing of how the model operates relative to the retrievals. To enable meaningful visual comparisons, latent cooling/heating rates less than 0.25 K h−1 are not displayed in Fig. 11, but the mean values displayed above each set of data include the full range of heating and cooling.
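The histogram construction described above, which displays only rates exceeding 0.25 K h−1 while computing means over the full range, can be sketched as follows (illustrative only; the function name, bin count, and array layout are assumptions of this sketch):

```python
import numpy as np

def heating_histograms(rates, display_min=0.25, bins=50):
    """Separate latent heating/cooling rates (K h^-1) into cooling and
    heating populations, histogram only values above the display
    threshold, and compute means over the full range of each sign.

    rates : array of latent heating rates; negative values are cooling.
    """
    cooling = -rates[rates < 0.0]   # cooling magnitudes (positive)
    heating = rates[rates > 0.0]
    # Histograms exclude small rates for display purposes only
    hist_cool = np.histogram(cooling[cooling >= display_min], bins=bins)
    hist_heat = np.histogram(heating[heating >= display_min], bins=bins)
    # Means include sub-threshold values, matching the figure annotations
    return hist_cool, hist_heat, cooling.mean(), heating.mean()
```

Excluding small rates only from the display (not the means) is what allows the abundant weak cooling in the predictive run to depress its mean even though those values do not appear in the plotted histogram.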
Figure 11c reveals that the predictive run produced a very large number of small cooling rates relative to the latent heat retrievals (Fig. 11a), which is likely due to two main factors: (i) spurious evaporation near cloud edges as a result of numerical errors (Reisner and Jeffery 2009) and (ii) improper representation of small-scale and weak downdrafts in the Doppler radar analysis procedure [see Fig. 15 and discussion in Guimond et al. (2011)]. The latent heating histograms of the retrievals (Fig. 11b) and the predictive run (Fig. 11d) compare reasonably well, but too much heating is released in the predictive run, with a bias toward very large heating values (>150 K h−1). This bias compensates for the abundant small cooling rates, yielding a mean value of about 18 K h−1 for the predictive run relative to about 14 K h−1 for the latent heat retrievals.
The larger mean heating is the major reason the predictive run produced a significant positive bias in the volume-averaged tangential wind speed errors (Fig. 6). Interestingly, the mean heating for the saturated run is only about 14.5 K h−1 (the saturation assumption increases heating but also adds some cooling), yet the wind speed errors in Fig. 6 are on par with the predictive run, which has a higher mean heating rate (~18 K h−1). Differences in the turbulent dissipation formulations (see section 2) between the saturated and predictive runs could explain this discrepancy.
Figure 12 shows snapshots of latent heating/cooling rates averaged over low to midlevels (1–5-km height) from the retrievals (Fig. 12a) and the predictive run (Fig. 12b). Note that for Fig. 12b, the heating in the predictive simulation is averaged over a 20-min period to be somewhat consistent with the radar composite. Also note that no radar reflectivity mask was applied to the model data to allow a full visual comparison of the simulated fields relative to the retrievals. Regions outside of radar reflectivity were assigned zero values in the latent heat retrievals (Fig. 12a).
The retrievals in Fig. 12a (2225 UTC 2 August 1997) show that the latent heating and cooling are clustered in convective cells scattered around the vortex, revealing a highly asymmetric structure. These cells are coarse-grained representations of reality due to the relatively infrequent along-track sampling of the P-3 tail Doppler radar and the smoothing in the wind analysis (Reasor et al. 2009; Guimond et al. 2011). The predictive run in Fig. 12b (3.5 h into simulation) produced a latent heating (and cooling) field that has more banding than the retrievals, with only a few convective cells embedded within the bands. In addition, the heating in the predictive run occurs farther from the storm center, toward the edges of the radar domain, which leaves a large area where small cooling rates can dominate (Fig. 11c). The magnitude of the peak heating in the predictive run is lower than in the retrievals. This is due to averaging the model heating over the 20-min composite period, which smooths out the small-scale, large-amplitude peaks seen in the histogram in Fig. 11d.
Figure 13 further clarifies the issues in the predictive run observed in Fig. 12 by comparing the latent heating/cooling rates from the retrievals (2333 UTC 2 August 1997; Fig. 13a) and the predictive run (4.5 h into simulation; Fig. 13b). The predictive run continues to produce banded heating structures, rather than discrete cells, displaced farther from the storm center when compared to observations. As mentioned above, spurious evaporation from numerical errors near cloud edges (such as the eyewall) is likely a major problem in the predictive simulation, which could displace the eyewall heating farther from the storm center and require larger heating rates (see Fig. 11d) to maintain the storm intensity. Snapshots of the heating fields from the predictive simulation, instead of averaged ones, also tend to show long bands with some embedded convective cells (not shown). The overemphasis on banded heating structures may be due to a combination of insufficient resolution and diffusion inherent in the advection scheme.
Figure 14 shows the azimuthal and time mean structure of the latent heating/cooling in the retrievals (Fig. 14a) and predictive run (Fig. 14b). Note that in these figures, zero values were removed from the data before the averaging was performed. As observed in the horizontal snapshots shown in Figs. 12 and 13, the mean structure shows that the radius of peak heating in the predictive run is nearly twice that of the retrievals (~25 km in Fig. 14a and ~50 km in Fig. 14b), with a large region of small cooling values inside of the 25-km radius (Fig. 14b). The retrievals reveal mean eyewall heating that tilts outward with height with maximum values centered around 4 km altitude. The predictive run does not reveal a coherent eyewall structure in height and the bulk of the time mean heating occurs in the upper troposphere, peaking near 8 km altitude, which is responsible for the positive tangential wind speed bias at upper levels shown in Fig. 10c.
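The azimuthal and time mean with zero values excluded, as described for Fig. 14, can be sketched as follows. This assumes the heating field has already been interpolated to storm-relative cylindrical coordinates with layout `q[t, z, r, phi]`; that layout and the function name are assumptions of this sketch.

```python
import numpy as np

def azimuthal_time_mean(q):
    """Azimuthal-and-time mean of a heating field q[t, z, r, phi],
    excluding zero values (e.g., points outside radar reflectivity)
    from the average, as described for Fig. 14."""
    q = np.asarray(q, dtype=float)
    mask = q != 0.0
    counts = mask.sum(axis=(0, 3))                  # valid samples per (z, r)
    sums = np.where(mask, q, 0.0).sum(axis=(0, 3))  # sum of nonzero values
    # Bins with no valid samples are returned as NaN
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```

Removing zeros before averaging matters here: retaining the zero-filled regions outside radar reflectivity would artificially drag the mean heating toward zero wherever echo coverage is sparse.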
The retrieval and saturated runs explain a large percentage (~60%–70%; see Fig. 7) of the variance in the Doppler radar wind analyses because the forcing is derived from measurements of cellular convective structures and is static (fixed to specific locations around the vortex), which makes these simulations less sensitive to other physical parameterizations, such as turbulence. As a result, the latent heat retrievals are able to generate, to some degree, the observed asymmetries in the right place and at the right time. The predictive run has lower explained wind variance values mainly because of the wide eye (tangential wind speeds < 20 m s−1) region that displaces the eyewall and associated gradients in the wind field radially outward a significant amount. A detailed analysis of the causes for this structure is beyond the scope of this paper, but it is ultimately due to the forcing in the predictive run being dynamic (constantly evolving and coupled to various processes such as turbulence).
4. Conclusions and discussion
The ability of the latent heat retrievals, presented in Part I of this study, to reproduce the observationally analyzed wind fields of rapidly intensifying Hurricane Guillermo (1997) was tested using realistic numerical simulations of the storm at a resolution consistent with the Doppler radar analyses (2 km). The novel findings from these experiments are as follows:
Using the latent heat retrievals as four-dimensional forcing (“retrieval” run) produced very low intensity and structure errors (in terms of minimum surface pressure, tangential wind speed errors, and explained wind variance) and significantly improved simulations relative to a “predictive” run that is highly calibrated to the latent heat retrievals by using an EnKF procedure to estimate values of key model parameters (Godinez et al. 2012).
A reasonably accurate determination of the saturation state in the latent heat retrievals (in a manner similar to that presented in Part I) is necessary to ensure accurate intensity and structure simulations of Hurricane Guillermo (1997). Simulations with the retrievals where saturation was assumed for the entire inner core of Guillermo (releasing all heating/cooling) produced tangential wind fields and minimum surface pressures that were much too strong when compared to observations.
Although the retrieval run produced relatively accurate results, the minimum pressure trace was slightly too strong, which indicates that the latent heat retrievals may be releasing too much heat (due to errors in both the computation of saturation and the retrieved vertical velocity, which had a small positive bias; Reasor et al. 2009; Guimond et al. 2011), even though the tangential wind errors had a small negative bias. Both simulations with the latent heat retrievals (retrieval and saturated runs) produced excellent structure predictions of Guillermo, which were evaluated against the radar observational analyses through explained wind variance statistics and eye/eyewall qualitative and quantitative comparisons. The heating retrievals achieve high-quality structure statistics by forcing asymmetries in the wind field to occur with the correct amplitude and in the generally correct place and time.
Despite the extensive work using the latent heat retrievals to estimate key model parameters with an EnKF procedure (Godinez et al. 2012), the predictive run produced significant tangential wind speed and structure errors although the minimum pressure was reasonable. The positive tangential wind speed bias was shown to be largely a result of too much integrated heating released in the domain (especially in the upper troposphere) with a predisposition toward large heating values. These large heating values may have been trying to offset the massive amount of small cooling rates produced by the predictive run, which is probably a result of spurious evaporation due to numerical errors at cloud boundaries, such as the eyewall (Reisner and Jeffery 2009), although weak downdrafts are not represented well by the Doppler analysis. Furthermore, the spurious evaporation is likely the major contributor to the wide eye region observed in the predictive run, which resulted in an outward displacement of the radius of maximum winds and peak heating. The poor structure statistics (explained wind speed variance) produced by the predictive run were found to be the result of the outward expansion of the eyewall and associated gradients in the wind field. In addition, the latent heating in the predictive run was formed into bands stretched azimuthally around the vortex, whereas the Doppler radar retrievals revealed a cellular structure to the heating.
The clear advantage of the latent heat retrievals in producing a precise intensity and structure simulation of Guillermo highlights their relative accuracy, their value for convection initialization in numerical models, and the need for continuous observation of convective events in the hurricane inner core. However, more case studies are needed to determine the robustness of these results. While we believe that airborne Doppler radars provide the highest-quality observations of convection, infrequent sampling of storm cores and relatively poor time continuity of the measurements limit the use of these data from an operational perspective. The use of passive lightning imagers, with their large fields of view and continuous mapping of 2D and 3D electrical discharges occurring within deep convection, may prove useful for improving simulations of hurricane structure. Computing heating rates from lightning measurements will likely be difficult, but placement of deep convection in near–real time appears to be a very attainable goal.
Finally, future work needs to be done on improving the latent heating algorithm and retrievals presented in Part I as well as evaluating their utility in a numerical modeling framework for various storms, models, and data assimilation procedures. The use of a new airborne Doppler radar developed at NASA’s Goddard Space Flight Center called the High-Altitude Imaging Wind and Rain Airborne Profiler (HIWRAP; Heymsfield et al. 2007) will help improve the latent heat retrievals in several ways. The simulations conducted in this paper were for short time periods (~6 h) and work needs to be done for longer periods when model errors can grow quickly. A second day of airborne Doppler radar data for Hurricane Guillermo (1997) exists, which would be useful for these purposes.
The first author would like to thank Dr. Paul Reasor and Dr. Matt Eastin for providing their Doppler radar analysis of Guillermo. Thanks go to Dr. Mark Bourassa and Dr. Robert Hart for several discussions on the work and for their useful suggestions. Dr. Humberto Godinez is acknowledged for providing the parameter estimates and output from his EnKF work. Dr. Jason Sippel and Dr. Gerald Heymsfield also provided some helpful comments. In addition, we thank all of the reviewers for their constructive criticism. This research was supported by the Los Alamos National Laboratory through a project entitled “Flash Before the Storm: Predicting Hurricane Intensification Using LANL Lightning Data” with Dr. Chris Jeffery, the PI. Financial support was also provided by a NASA ocean vector winds contract and a NOAA grant to Dr. Mark Bourassa.
Current affiliation: NASA Goddard Space Flight Center, Greenbelt, Maryland.
The words composite and analysis will be used interchangeably when discussing the Doppler radar wind/reflectivity product.