The Hurricane Research Division (HRD) Real-time Hurricane Wind Analysis System (H*Wind) is a software application used by NOAA’s HRD to create a gridded tropical cyclone wind analysis based on a wide range of observations. These analyses are used in both forecasting and research applications. Although mean bias and RMS errors are listed, H*Wind lacks robust uncertainty information that considers the contributions of random observation errors, relative biases between observation types, temporal drift resulting from combining nonsimultaneous measurements into a single analysis, and smoothing and interpolation errors introduced by the H*Wind analysis. This investigation seeks to estimate the total contributions of these sources, and thereby provide an overall uncertainty estimate for the H*Wind product.
A series of statistical analyses shows that, in general, the total uncertainty in the H*Wind product in hurricanes is approximately 6% near the storm center, increasing to nearly 13% near the tropical storm force wind radius. The H*Wind analysis algorithm is found to introduce a positive bias to the wind speeds near the storm center, where the analyzed wind speeds are enhanced to match the highest observations. In addition, spectral analyses are performed to ensure that the filter wavelength of the final analysis product matches user specifications. With increased knowledge of bias and uncertainty sources and their effects, researchers will have a better understanding of the uncertainty in the H*Wind product and can then judge the suitability of H*Wind for various research applications.
Over the past several years, vast improvements in observations, computing power, and objective analysis techniques (Powell et al. 1998) have enabled scientists to generate gridded analyses of tropical cyclone wind fields. These analyses have several practical uses in both forecasting and research applications. Although they are designed to be research products, near-real-time analyses can serve as guidance to help forecasters evaluate the present intensity and structure of a tropical cyclone. Forecasters can then use these parameters to predict the evolution and primary threats of an approaching system (Powell and Houston 1996; Morey et al. 2006). This information can then be passed to emergency managers, who can make timely evacuation orders and thus minimize the loss of life. Researchers can use gridded products for poststorm reanalysis (Powell et al. 2010), analysis of storm structure (Vickery and Wadhera 2008), and modeling applications (Morey et al. 2006; Halliwell et al. 2008; Xiao et al. 2009).
The National Oceanic and Atmospheric Administration (NOAA)/Hurricane Research Division’s (HRD’s) Real-time Hurricane Wind Analysis System (H*Wind; Powell et al. 1998, 2010) is a gridded analysis product that is available for both current and historical cases. The H*Wind software produces a gridded analysis by interpolating and smoothing wind speed observations from multiple platforms. Basic error information is already generated by H*Wind, including bias and RMS error of the analysis compared to all observations with winds of tropical storm force or greater; however, no extensive uncertainty analysis has been performed on the H*Wind analysis product, and users can benefit from additional uncertainty information.
This study investigates the sources of uncertainty of H*Wind hurricane wind field analyses and seeks to determine their impacts on the gridded product. These sources can be categorized into two basic groups. The first group consists of observation uncertainty, including random data errors and relative biases between different data platforms. The contributions of these errors are determined by calculating the biases and variances of each type of observation data relative to the H*Wind analysis. The second group contains errors introduced by the data assimilation techniques in H*Wind, including height adjustment errors, averaging time conversions, smoothing and interpolation errors, and systematic biases. A spectral analysis is performed to determine the effects of smoothing, and systematic biases are detected by comparing the H*Wind analyses with the individual observations. With an understanding of these error sources, researchers and forecasters will have better uncertainty estimates for hurricane wind fields, and improvements can be made to the H*Wind algorithm to correct systematic errors.
2. Overview of H*Wind
The H*Wind application contains an extensive database of tropical cyclone wind observations from the year 2000 onward. This database contains observations from a vast array of marine, land, aircraft, and satellite platforms. The user selects a storm and a time frame (generally 4–6 h, depending on the availability of data) to perform the analysis, and H*Wind retrieves the corresponding observations from its database. The winds from each observation platform are automatically adjusted to a 1-min average at a 10-m height based on an open marine exposure as described below; however, analysts can also make manual adjustments or alter the adjustment schemes if necessary.
a. Data types and adjustments
Data from the NOAA stepped-frequency microwave radiometer (SFMR) are considered to be among the most accurate marine wind observations (Powell et al. 2010) and are therefore highly weighted in H*Wind analyses when available. At the time of this study, two NOAA aircraft, NOAA-42 and NOAA-43, were equipped with SFMR instrumentation, which measures microwave emission from the ocean surface to determine the wind speed. Recently, the U.S. Air Force Reserve Hurricane Hunter aircraft also obtained SFMR instrumentation, which helps to further increase the coverage of SFMR data. Adjustments of the SFMR data are based on a calibration that was performed in 2004 and 2005 (Uhlhorn and Black 2003; Uhlhorn et al. 2007).
In the absence of SFMR data, surface-adjusted flight-level winds are preferred because of their high density and reasonably low uncertainty. These observations are adjusted to a 10-m height using the SFMR-based method outlined by Powell et al. (2009). This method derives a relationship between the flight-level winds and surface winds based on comparisons of flight-level and SFMR data from 179 radial flight legs. The adjusted flight-level winds are compared to nearby surface observations, and manual alterations are made to the adjustment scheme if large discrepancies are found.
Observations from GPS dropsondes, ships, and buoys are limited in quantity, but they all provide useful surface data for H*Wind analyses when available. Each dropsonde’s data are adjusted to a 1-min average, 10-m wind by calculating the mean wind speed in the lowest 150 m of the boundary layer, and adjusting to the surface using a composite of dropsonde-based eyewall wind profiles developed by Franklin et al. (2003). All other marine-based data are adjusted to a standard height of 10 m using a surface layer model (Liu et al. 1979), with input winds assumed to represent a 10-min mean wind. The maximum 1-min wind is then obtained by multiplying by a gust factor equal to 1.094 (Powell et al. 2010).
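The averaging-time and layer-mean adjustments described above can be sketched as follows. This is an illustrative Python sketch, not the operational H*Wind code: the function names are invented here, and only the 1.094 gust factor and the 150-m layer mean come from the text.

```python
# Gust factor converting an assumed 10-min mean wind to a maximum 1-min wind
# (Powell et al. 2010).
GUST_FACTOR_10MIN_TO_1MIN = 1.094

def max_one_minute_wind(mean_10min_wind_ms):
    """Estimate the maximum 1-min wind (m/s) from a 10-min mean wind."""
    return mean_10min_wind_ms * GUST_FACTOR_10MIN_TO_1MIN

def dropsonde_mean_layer_wind(heights_m, speeds_ms, layer_top_m=150.0):
    """Mean wind speed over the lowest layer_top_m metres of a sonde profile."""
    layer = [s for h, s in zip(heights_m, speeds_ms) if h <= layer_top_m]
    return sum(layer) / len(layer)
```

The dropsonde layer mean would subsequently be reduced to a 10-m wind using the Franklin et al. (2003) eyewall profiles, a step omitted here.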
Satellite observations are available from tracking low-level cloud movements in Geostationary Operational Environmental Satellite (GOES) visible imagery and, prior to 2009, from Quick Scatterometer’s (QuikSCAT’s) SeaWinds scatterometer. These observations provide high-density data near the outer fringes of tropical cyclones, where observations are otherwise quite sparse. Closer to the center of tropical cyclones, the QuikSCAT observations are less useful because of serious contamination from clouds and rain. QuikSCAT contains an internal rain-flagging algorithm that highlights these severely contaminated observations so they can easily be removed from the H*Wind analysis. Unfortunately, this algorithm often has problems in high wind environments (the backscatter from heavy rain is in some ways similar to the signal from high winds), resulting in the flagging of uncontaminated observations and the failure to flag observations with minor contamination (Brennan et al. 2009). Subjective quality control is therefore required to ensure that the QuikSCAT observations are in reasonable agreement with nearby wind speeds from other platforms. New QuikSCAT retrieval algorithms (Stiles and Dunbar 2010) will reduce these problems. The standardization processes for the satellite winds are described by Powell et al. (1996) and Dunion et al. (2002).
The primary land-based observation platforms used in H*Wind analyses include reports from aviation routine weather report (METAR) and Automated Surface Observing System (ASOS) stations. Wind speeds observed from these platforms are heavily influenced by local and upstream terrain features. Roughness parameters are therefore estimated for each station, and the observations are standardized to a marine exposure as described by Powell et al. (2010).
b. Generating the analyses
Once all of the data are obtained and adjusted to a common framework, H*Wind plots each observation in a storm-relative reference frame. This procedure requires knowledge of the storm track during the analysis period. The H*Wind database contains the storm track data based on best-track archives and various reconnaissance fixes, although it is frequently necessary to make adjustments and interpolations. For example, the best-track coordinates are rounded to the nearest tenth of a degree, often resulting in unrealistic kinks in the track that must be removed. Once all of the observations are plotted in a storm-relative reference frame (Fig. 1), the user selects which data types to include in the analysis. Generally, most observation platforms are included, although sometimes less reliable platforms are excluded if data from other platforms are located nearby. The user also inspects the observations for nonrepresentative data. If the surface-adjusted flight-level data do not agree with nearby observations from other platforms, changes are made to the adjustment scheme. In addition, any observations that have unrealistic variations in wind speed or direction in relation to the surrounding observations are removed from the analysis. A set of user guidelines is available to analysts in order to assist with these quality control steps.
The H*Wind system produces a gridded analysis by smoothing and interpolating the available wind observations along with a series of interpolated bogus points using a cubic B spline interpolation scheme that minimizes the least squares difference between the observations and the analysis (Powell and Houston 1996). This scheme consists of five superimposed meshes, each with a unique filter wavelength. The filter wavelength is smallest in the innermost mesh, allowing for the finest-scale resolution. In the outer meshes, the filter wavelength is considerably larger because the radial gradients are weaker there (the horizontal scale of motion has much less variability than locations closer to the storm center), and the observation density is usually smaller than that near the storm center. Each observation and bogus point is given a default weight between zero and one, with the most accurate and dense data platforms receiving the largest weights (Franklin et al. 1993). The sizes and filter wavelengths of the meshes, the weights of each observation type, and the distribution of bogus points can be modified by the analyst if necessary. Finally, H*Wind enhances the wind speeds near the eyewall to ensure that the peak observed wind speed is represented in the analysis. This enhancement is azimuthally symmetric as a percentage change in wind speed, and it decreases exponentially with distance from the eyewall so that the outer wind radii are not overestimated. The resulting H*Wind analysis is shown in Fig. 2. In addition to the contour plot and wind radii, Fig. 2 also shows the location and magnitude of the maximum observed and analyzed wind speeds, as well as the mean bias and RMS difference between the analysis and the observations.
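The least-squares smoothing step can be illustrated with a much-simplified one-dimensional analog. H*Wind itself uses nested cubic B-spline meshes (Powell and Houston 1996); the sketch below instead fits the observations, by weighted least squares, onto a smooth Fourier basis whose shortest wavelength matches a chosen filter wavelength, which reproduces the low-pass behavior described above. All names and the basis choice are illustrative assumptions.

```python
import math

def basis(x, domain_length, cutoff_wavelength):
    """Constant plus sine/cosine modes with wavelength >= cutoff_wavelength."""
    cols = [1.0]
    k = 1
    while domain_length / k >= cutoff_wavelength:
        cols.append(math.cos(2 * math.pi * k * x / domain_length))
        cols.append(math.sin(2 * math.pi * k * x / domain_length))
        k += 1
    return cols

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def weighted_least_squares_fit(xs, ys, weights, domain_length, cutoff):
    """Fitted values minimizing sum_i w_i * (y_i - fit(x_i))**2."""
    rows = [basis(x, domain_length, cutoff) for x in xs]
    p = len(rows[0])
    # Normal equations: (A^T W A) c = A^T W y
    ata = [[sum(w * r[i] * r[j] for r, w in zip(rows, weights))
            for j in range(p)] for i in range(p)]
    atb = [sum(w * r[i] * y for r, y, w in zip(rows, ys, weights))
           for i in range(p)]
    coeffs = solve(ata, atb)
    return [sum(c * v for c, v in zip(coeffs, r)) for r in rows]
```

Because no basis function varies on scales shorter than the cutoff, structure below the filter wavelength cannot appear in the fit, mirroring the smoothing seen in the spectral analyses of section 6.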
3. Data and methods
The first analysis performed is a comparison of the various observation types with each other and with the H*Wind analyses to determine the amount of bias and uncertainty in the various data types. First, a series of H*Wind analyses is generated using the H*Wind program. After being quality controlled, the analysis and observation wind speeds are binned by radius so that the bias and uncertainty can be calculated in each bin. Because the individual storm analyses do not have enough data to obtain statistically significant results for each individual data type, the analyses are combined into a larger dataset, which requires scaling the wind speeds and radial position to nondimensional numbers to avoid adding unwanted variability.
The H*Wind program is used to generate a set consisting of 80 analyses from Hurricanes Charley and Ivan in 2004; Hurricanes Katrina, Rita, and Wilma in 2005; and Hurricane Felix in 2007. This dataset contains storms with a wide array of sizes, intensities, and eye radii, resulting in a small (the H*Wind archive contains thousands of analyses) but representative sample of H*Wind analyses. Each analysis is created by the same analyst in accordance with the user guidelines, minimizing the impacts of subjective decisions by the analyst. Analyses are created every 6 h (in-use operational H*Wind analyses may contain as few as 4 h of data), while the tropical cyclone maintains at least hurricane intensity, assuming that either flight-level or SFMR data from reconnaissance aircraft are available. Weaker tropical storms are not included in this study because of the large variations in structure that are found in disorganized systems. Each analysis contains a 6-h window of observation data, which provides enough time for reconnaissance aircraft to sample the storm, while attempting to minimize temporal variability within the data. H*Wind assumes that all of the data are representative of the analysis time, which is defined by the center of the 6-h analysis window. The choices of time windows and analysis frequency ensure that no data are used in more than one analysis, which would cause observations to be counted multiple times in this study. Both the analyzed wind field and the 10-m adjusted observations are obtained for each analysis period. It is important to note that the H*Wind analyses used in this study are generated in a postanalysis setting. In a real-time analysis setting, fewer observations are available and the analyst has less time to quality control the observations, which will increase the uncertainty.
Before any statistical evaluations are made, it is necessary to perform several quality control procedures on the data. First, all land-based observations are removed from the dataset. Although H*Wind adjusts land winds to marine exposure, this adds additional variability to the process. In addition, several storm quadrants have large spatial gaps in data coverage. In practice, users can add a background wind field based on an H*Wind analysis from a previous time to help fill in these gaps. We do not use background fields in this study because this would add weight to the earlier analysis on which the background field is based. Storm quadrants that cannot be fully sampled without a background field are instead removed from the dataset. Radial profiles from each quadrant of the H*Wind analyses are then plotted alongside the observations to ensure that the analyses are representative of the data. A common issue is that several of the H*Wind analyses are unable to resolve concentric eyewalls with default analysis parameters. H*Wind easily resolves the primary eyewall approximately 25 km from the center as a result of its enhancement scheme at the radius of maximum wind (Rmax), but the secondary peak near 60 km is left unresolved (Fig. 3). Resolving this peak would require a substantially lower filter wavelength, which would attempt to resolve small variations in wind speed, resulting in "bull's eyes," especially in the tangential direction. Analyses that display this characteristic are therefore removed from the dataset to avoid adding additional variability. As a result of these quality control measures, the results of this study are representative of well-sampled hurricanes over open water that are not undergoing eyewall replacements or other major structural changes. Analyses of landfalling systems, undersampled systems, and systems with concentric eyewalls will have additional variability that is beyond the scope of this study.
After all of the necessary quality control procedures are completed, calculations are made to determine the uncertainties and biases of each observation platform relative to the H*Wind analyses. Ideally, a data denial experiment would be performed in which several H*Wind fields would be generated for each analysis time. Each H*Wind field would have one data type excluded, and a comparison of the resulting analyses would yield the bias and uncertainty contributions of each individual data platform. Unfortunately, in the vast majority of cases the data are too sparse for such an approach. Instead, a binning approach is used, which provides estimates of biases (if any) that are present in the observation data. Each storm is divided into 10 radial bins (Fig. 4), which compromises between ensuring that there is a significant number of observations in each bin and keeping the bin sizes small to minimize spatial variability. The first bin contains the observations inside Rmax. This bin contains all of the variability associated with the eye, and is therefore excluded from most of the calculations. Outside of Rmax, the observations are placed into the nine remaining bins, which are based on the relative position of the observations between Rmax and the tropical storm wind radius (Rts). The cutoffs for each bin outside of Rmax are equally spaced and calculated separately for each storm analysis and quadrant based on the values of Rts and Rmax. The four storm quadrants are defined relative to the storm motion vector, but the storm motion is not subtracted from the wind speeds. Each radial bin therefore represents the same fraction of the distance between Rmax and Rts in each storm. Unfortunately, although there are adequate data in each bin to assess the overall bias between the observations and H*Wind, there are not enough data available to obtain a statistically significant estimate of the biases between each individual observation platform. It is therefore necessary to combine data from multiple storm analyses to create a larger dataset.
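The binning scheme described above can be sketched as a standalone function: bin 0 holds observations inside Rmax, and nine equally spaced bins span the distance from Rmax to Rts. In the study the cutoffs are computed separately for each storm analysis and quadrant; this sketch and its names are illustrative only.

```python
def radial_bin(r_km, rmax_km, rts_km, n_outer_bins=9):
    """Bin index (0..n_outer_bins) for radius r_km, or None beyond Rts."""
    if r_km < rmax_km:
        return 0                      # inside the radius of maximum wind
    if r_km > rts_km:
        return None                   # beyond the tropical storm wind radius
    width = (rts_km - rmax_km) / n_outer_bins
    idx = int((r_km - rmax_km) / width) + 1
    return min(idx, n_outer_bins)     # place r == Rts into the last bin
```

Because the bin edges scale with Rmax and Rts, a given bin index represents the same fractional radial position in every storm, which is what allows bins from different storms to be composited.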
4. Compositing the analyses
The lack of sufficient data to obtain statistically significant estimates of the biases and random errors of each observation platform necessitates combining data from multiple storm analyses to increase the size of the dataset. This process could create large amounts of unwanted variability resulting from the different size and intensity characteristics of each individual storm. To minimize this unwanted variability, the wind speeds in each storm are scaled to a nondimensional number by dividing by a parameterized radial wind profile. Various parameterization schemes (Holland 1980; Willoughby et al. 2006) exist to derive radial gradient wind profiles, but these profiles are not readily applicable to the surface level. A neural network approach described below is used instead to derive radial surface wind profiles given a set of very basic storm parameters that are easily retrieved from the wind observations and H*Wind analyses. An important consequence of this technique is that the profiles are based on mathematical fitting alone, and not on any physical constraints.
Figure 5 shows a schematic illustrating how the neural network is trained to derive radial wind profiles. First, the data from one-half of the storm analyses are set aside for the specific purpose of training the network. To create this training dataset, the storm analyses are placed into three categories of intensity and size, with each category containing roughly one-third of the analyses. Storm analyses from each category of intensity and storm size are then manually placed in the training dataset, ensuring that storms with a wide spectrum of sizes and intensities are represented. Next, the observations and H*Wind analyses are used to calculate seven parameters for each quadrant of each analysis in the training dataset. The maximum wind speed and the radius of maximum wind are used to accurately represent the peak of the wind profile, and the maximum radius at which tropical storm force wind speeds occur is used to represent the size of each storm. The remaining four parameters are the wind speeds at 2 and 4 times the radius of the maximum wind and the radius at which the wind speed decays to one-half and three-quarters of the maximum. These four parameters are selected to give an estimate of the radial decay rate outside of the eyewall.
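The seven input parameters listed above can be extracted from a discretized radial wind profile as sketched below. The profile is assumed to decay monotonically outside the wind maximum, the 17.5 m/s tropical storm force threshold is a standard value not stated in the text, and all function names are illustrative.

```python
TS_FORCE_MS = 17.5  # tropical storm force threshold (m/s); assumed value

def profile_parameters(radius_km, speed_ms):
    """Seven neural-network input parameters from one radial wind profile."""
    vmax = max(speed_ms)
    rmax = radius_km[speed_ms.index(vmax)]

    def speed_at(r):
        # piecewise-linear interpolation of the profile at radius r
        for i in range(len(radius_km) - 1):
            r0, r1 = radius_km[i], radius_km[i + 1]
            if r0 <= r <= r1:
                v0, v1 = speed_ms[i], speed_ms[i + 1]
                return v0 + (v1 - v0) * (r - r0) / (r1 - r0)
        return speed_ms[-1]

    def radius_where(target):
        # first radius outside rmax where the speed decays to `target`
        for i in range(len(radius_km) - 1):
            r0, v0 = radius_km[i], speed_ms[i]
            r1, v1 = radius_km[i + 1], speed_ms[i + 1]
            if r0 >= rmax and v0 >= target >= v1:
                return r0 + (r1 - r0) * (v0 - target) / (v0 - v1)
        return radius_km[-1]

    return {
        "vmax": vmax,                                   # peak of the profile
        "rmax": rmax,                                   # radius of maximum wind
        "rts": radius_where(TS_FORCE_MS),               # storm size
        "v_at_2rmax": speed_at(2 * rmax),               # decay rate estimates
        "v_at_4rmax": speed_at(4 * rmax),
        "r_half_vmax": radius_where(0.5 * vmax),
        "r_threequarter_vmax": radius_where(0.75 * vmax),
    }
```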
Once all of the parameters are obtained, they are input into a radial basis neural network. This network, described in detail in the appendix, generates a radial surface wind profile for each set of input parameters. The neural network profiles are only defined at 50 equally spaced points between Rmax and Rts, and at two additional points inside Rmax, so a cubic spline interpolation is used to obtain profile speeds in between these points. Each of the neural network–derived radial profiles is plotted alongside the observations and the H*Wind profile from the respective storm to ensure that the neural network–derived profiles provide a reasonable fit. Most of the neural network profiles fit the data extremely well, but two of the profiles are well outside the spread of the observations and have unrealistic wind maxima in locations where one does not exist (not shown). These suspect profiles are from Hurricanes Rita and Felix, just a few hours before the respective storms made landfall. It is possible that the nearby land was beginning to impact these systems, and therefore the data from these two analyses are removed from the dataset. The remaining observations and H*Wind analysis grid points are scaled by dividing the observed wind speeds by wind speeds of the neural network at the corresponding radius. The resulting nondimensional winds are used to estimate the biases and variability in the data.
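The scaling step can be sketched as follows: each observed wind speed is divided by the profile wind speed interpolated to the observation radius, yielding a nondimensional wind. The study uses a cubic spline through the 50 neural network profile points; plain linear interpolation is used here for brevity, and the function name is illustrative.

```python
import bisect

def scale_observation(obs_radius_km, obs_speed_ms, profile_radii, profile_speeds):
    """Nondimensionalize one observation against a radial wind profile."""
    # locate the profile segment containing the observation radius
    i = bisect.bisect_left(profile_radii, obs_radius_km)
    i = min(max(i, 1), len(profile_radii) - 1)
    r0, r1 = profile_radii[i - 1], profile_radii[i]
    v0, v1 = profile_speeds[i - 1], profile_speeds[i]
    profile_speed = v0 + (v1 - v0) * (obs_radius_km - r0) / (r1 - r0)
    return obs_speed_ms / profile_speed
```

A scaled value of 1.0 means the observation matches the profile exactly; the departures from 1.0 are what the bias and variability statistics of section 5 summarize.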
After the wind speeds are scaled, the effectiveness of the neural network technique as a scaling tool is analyzed by determining how much variability is added to the dataset when the storm analyses are combined. The standard deviation of the scaled wind speeds in each bin is calculated separately for each storm and compared to the standard deviation when all of the analyses were combined. The standard deviation of the combined dataset is up to 15% higher than the mean of the standard deviations of the individual storm analyses (Fig. 6). This increase represents the interstorm variability that the scaling procedure failed to remove from the combined dataset.
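The interstorm-variability check described above amounts to comparing the standard deviation of a combined bin with the mean of the per-storm standard deviations, as in this illustrative sketch (names and sample data are not from the study):

```python
import math

def stdev(xs):
    """Population standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def interstorm_inflation(per_storm_scaled_winds):
    """Ratio of combined-bin stdev to mean per-storm stdev (1.0 = no inflation)."""
    combined = [w for storm in per_storm_scaled_winds for w in storm]
    mean_individual = (sum(stdev(s) for s in per_storm_scaled_winds)
                       / len(per_storm_scaled_winds))
    return stdev(combined) / mean_individual
```

If the scaling removed all interstorm differences, the ratio would be 1.0; the up-to-15% excess reported above corresponds to ratios up to about 1.15.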
5. Evaluation of biases and random errors
After the wind speeds from each observation are scaled to a nondimensional number and the data from each analysis are combined into a single dataset, the mean and standard deviation are calculated for each observation platform in each radial bin. A significant difference in the means between two or more observation platforms within any given bin signifies a relative bias between those platforms. The standard deviation of each data type represents the total variability for that platform and bin. This variability represents the sum of the random observation errors and temporal variations within the analysis windows, which contribute to the uncertainty in the H*Wind analyses, as well as spatial variations caused by binning the data and the interstorm variability introduced by combining data from multiple analyses. Because the latter two sources of variability do not contribute to the uncertainty of H*Wind, the total uncertainty in H*Wind is slightly lower than the values reported in this analysis.
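The per-bin statistics described above can be sketched as follows: for one platform's scaled winds in one bin, compute the mean, the sample standard deviation, and the standard error of the mean, which is what determines whether a difference in means between platforms is significant. The function name is illustrative.

```python
import math

def bin_statistics(scaled_winds):
    """Mean, sample stdev, and standard error for one platform in one bin."""
    n = len(scaled_winds)
    mean = sum(scaled_winds) / n
    var = sum((w - mean) ** 2 for w in scaled_winds) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    return {"n": n, "mean": mean, "stdev": sd, "stderr": sd / math.sqrt(n)}
```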
Figure 7 shows the mean and standard error of the scaled wind speeds for five observation platforms, as well as the H*Wind analysis grid points as a function of radius. These five platforms include surface-adjusted air force reconnaissance data (AFRC), SFMR data from the two NOAA aircraft, GPS dropsonde observations (GPSSonde), and SeaWinds data from QuikSCAT. These observation platforms are selected because they are all widely available in a vast majority of the cases used in this study. Other marine-based platforms (e.g., buoys and ships) are far too sparse to obtain statistically significant results, and other satellite data [e.g., Advanced Scatterometer (ASCAT) and GOES cloud drift winds] have the vast majority of their data located outside of Rts. A mean of one in Fig. 7 signifies a wind speed equal to the interpolated neural network profile wind speed. Because H*Wind is used to train the neural network, the mean of the scaled H*Wind speeds remains close to one; however, the mean scaled wind speed of most of the observation platforms near the eyewall is approximately 0.9, indicating that the H*Wind grid points have a positive bias of approximately 10% in this area. This large bias near the center is caused by H*Wind’s enhancement algorithm, which enhances the H*Wind profile to equal the maximum observed wind speed in the eyewall, and not the mean. Away from the center, there is still an overall slight positive bias, but this is reflected accurately by the mean bias statistic reported by the H*Wind product.
Very little bias is found between the observation platforms near the storm center, indicating that the data are in strong agreement where they are most important operationally. Approximately one-third of the way between the Rmax and Rts, the biases between the observation types begin to increase substantially, adding uncertainty to the H*Wind analyses. QuikSCAT unsurprisingly experiences problems with rain contamination. Near the storm center, where the rain is heaviest, most of the observations are properly removed by the QuikSCAT rain-flagging algorithm. The algorithm is far less effective in the middle bins, where the rain is more scattered. Numerous rain-contaminated observations in the middle bins are therefore missed by the algorithm, resulting in wind speeds biased 10%–20% higher than the other data platforms. This bias decreases on the far outside of the storm as the rain rate decreases. Also, there is a significant bias between the SFMR measurements from the NOAA-42 and NOAA-43 aircraft in the outermost bins, although this bias disappears near Rmax. These aircraft were used in the Hurricane Rainband and Intensity Change Experiment (RAINEX) in Hurricanes Katrina and Rita in 2005 (Houze et al. 2006). The RAINEX flight paths specifically targeted the eyewall and outer rainbands. The corresponding data therefore represent the conditions in the eyewall and rainbands extremely well, but do not account for the conditions outside of these features. Finally, the GPS dropsonde observations generally show a small negative bias outside of the eyewall; however, the density of dropsonde observations in this region is extremely small, resulting in large standard errors, which render this bias statistically insignificant.
The standard deviations of the scaled wind speeds for each data type (Fig. 8) provide a measure of general uncertainty and range between 0.07 and 0.15, or 7%–15%. The smallest variability is generally found near the storm center, again indicating that the observations are most reliable near the eyewall, where they are crucial in determining the storm intensity. The variability of most data types increases toward the outside of the tropical cyclones. This increase is caused by the larger distance between observations in this region, as well as the larger area covered by the binning scheme. An exception to this trend is the H*Wind analysis grid points in which the standard deviation begins to decrease approximately halfway between Rmax and Rts. This decreased variability is caused by heavy smoothing of the H*Wind field in this region, where small-scale variations are less important in determining the extent of the wind field. The effects of smoothing are discussed in detail in section 6. Of the remaining observation platforms, QuikSCAT has the least amount of variability near Rts, indicating that satellite data are valuable toward determining the extent of tropical storm force winds. Finally, the standard deviations of the GPS dropsonde observations are relatively noisy, reflecting the low observation density.
To determine whether any azimuthal trends exist in the uncertainties and biases for each observation type, the above calculations are repeated separately for each of the four quadrants relative to the storm motion. The results from this analysis (not shown) show no consistent differences between the four storm quadrants. Even near Rmax, where H*Wind assumes radial symmetry in its enhancement algorithm, no large biases are found for any given quadrant. It is possible that the act of splitting the data into quadrants has resulted in a shortage of data, which would make it difficult to perceive any significant signal in the results. If there is a substantial bias in a given quadrant, discerning it will require a much larger dataset.
6. Spectral analysis
To determine the effects of smoothing on the H*Wind analyses, several spectral analyses are performed on the H*Wind grid points to ensure that the resolution in both the radial and tangential directions matches the prespecified values. Accurate resolution in H*Wind is critical when studying storm structure and in computer modeling applications. Radial spectral density is calculated at varying azimuths to determine whether the resolution of H*Wind has any tangential dependency. Similarly, tangential spectra are analyzed at several different radii to determine the presence of radial dependency. Because the H*Wind grid points are defined on a rectangular grid, some of the spectral analyses necessitate the interpolation of wind speeds at locations in between the grid points. A two-dimensional cubic spline interpolation is used to approximate the wind speeds in these cases.
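The spectral computation can be sketched as a discrete Fourier transform of an equally spaced wind-speed transect: a steepened drop-off in power at short wavelengths reveals the analysis filter. The direct O(N²) transform below is for illustration only; an actual analysis would use an FFT routine.

```python
import cmath
import math

def power_spectrum(samples):
    """Power at each positive wavenumber k = 1 .. N//2 for an equally
    spaced transect of N wind-speed samples."""
    n = len(samples)
    powers = []
    for k in range(1, n // 2 + 1):
        coeff = sum(s * cmath.exp(-2j * math.pi * k * i / n)
                    for i, s in enumerate(samples))
        powers.append(abs(coeff) ** 2 / n)
    return powers
```

Plotted on a log–log scale against wavelength (transect length divided by k), such spectra show the nearly constant downward slope and the filter-induced steepening discussed below.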
Radial power density spectra for five H*Wind analyses are plotted in Fig. 9. Generally, the spectra of wind speed observations, when plotted on a log–log scale, have a nearly constant downward slope as wavelength decreases (Freilich and Chelton 1986). A steepening of the slope signifies the presence of smoothing at the corresponding wavelengths, and noise will cause the spectra to level off. The five analyses plotted in Fig. 9 each undergo the most smoothing between 15 and 50 km. This range contains the prespecified filter wavelengths of the two innermost analysis meshes of these analyses (approximately 33 and 43 km). H*Wind is able to resolve variability at wavelengths smaller than 33 km in the immediate vicinity of the eyewall as a result of the enhancement scheme.
There is some variation in the wavelengths at which smoothing occurs in the various analyses, even though each analysis has the same prespecified filter wavelengths. Much of this variation is caused by the observation density in each individual storm. The analyses of well-sampled storms (e.g., Katrina) generally have slightly smaller filter wavelengths than those of poorly sampled storms. Some of this variability in filter wavelengths is also caused by the structural characteristics of each storm, such as the radius of the eyewall. Changing the azimuth of the spectra in Fig. 9 (not shown) has very little effect on the results. The power spectral density is slightly lower at all wavelengths for azimuths other than the four cardinal directions as a result of interpolation between grid points, but this effect is negligible when compared to the spread of the spectral plots for different storm analyses.
Tangential spectra at radii of 50 and 200 km are shown in Figs. 10 and 11, respectively. At a 50-km radius, the preset filter wavelength in H*Wind is approximately 40 km. The spectra in Fig. 10, although jagged, generally show smoothing in the 25–75-km range. Again, there is some variation among the analyses because of the differences in their data coverage, but there is no discernible signal left at wavelengths below 25 km. At a 200-km radius, data are much sparser, and so the H*Wind filter wavelength is set to approximately 170 km. The spectra in Fig. 11 show that smoothing begins to occur at this wavelength, and the signal deteriorates rapidly thereafter. The magnitudes of the tangential spectra are considerably smaller than those of the radial spectra. This reflects the fact that the variability of tropical cyclone wind speeds tends to be an order of magnitude larger in the radial direction than in the tangential direction.
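A tangential spectrum at a fixed radius R can be sketched by sampling the wind speed around a circle and converting each azimuthal wavenumber k to a wavelength 2πR/k. The wavenumber-1 asymmetry below is an illustrative stand-in for a real analysis:

```python
import numpy as np

# Tangential spectrum at radius R: wind speed sampled around a circle,
# with azimuthal wavenumber k mapped to wavelength 2*pi*R/k.
R = 50.0                                   # km
n = 128
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
ws = 40.0 + 5.0 * np.cos(theta)            # mean flow plus wavenumber-1 asymmetry

amp = np.abs(np.fft.rfft(ws - ws.mean())) / n
k = np.arange(1, amp.size)                 # azimuthal wavenumber
wavelength = 2.0 * np.pi * R / k           # km along the circle
power = amp[1:] ** 2

# The single spike at wavenumber 1 (wavelength = 2*pi*R, about 314 km)
# recovers the imposed asymmetry.
```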
Several spikes are present in both the 50- and 200-km tangential spectra as a result of the nonuniform distribution of observations within the storms. Reconnaissance aircraft data, for example, are found only along the flight path. The circles along which the tangential spectra are computed cross the reconnaissance flight path several times and therefore contain large fluctuations in observation density. In regions where the observation density is high, such as along a radial flight leg, the analysis resolution is also high; in data-sparse regions, much more smoothing takes place, resulting in poorer analysis resolution. The presence of spikes in the tangential spectra signifies that the availability of observations is a major limiting factor in the resolution of H*Wind analyses; this is in agreement with Franklin et al. (1993).
The observational random errors and biases are estimated for several hurricane wind observation platforms. The total variability within the observations at a particular radius generally ranges from 7% near the eyewall to approximately 15% in the outer parts of the storm. Because up to 15% of this variability is artificially created by combining data from multiple storm analyses and by binning the data, the combined total uncertainty caused by observation errors and temporal variability ranges from 6% near the eyewall to nearly 13% on the outer fringes of the storm. A significant bias is found in the QuikSCAT observations as a result of rain contamination, indicating that the near-real-time QuikSCAT rain-flagging algorithm only captures the seriously contaminated observations. This bias disappears near Rts, which makes high-density scatterometer observations extremely useful for determining the extent of tropical storm wind speeds. This information can help emergency managers make storm preparations and order necessary evacuations.
The H*Wind algorithm also adds some bias to the analyses. A large positive bias relative to the observational mean occurs near the eyewall, resulting from the enhancement to the maximum observed wind speed. Although this bias decreases substantially with radius, H*Wind still has a slight positive bias outside of the eyewall. This bias is found to be consistent with the mean bias statistic reported on the H*Wind analysis product. Most of the small-scale variations in the observations are smoothed away by H*Wind, especially near the outer fringes of the storm. Although the preset filter wavelengths in H*Wind can be changed to improve resolution, observation density remains a limiting factor, and generally makes it difficult to analyze wind anomalies associated with small-scale features, including outer rainbands and multiple eyewalls, without adding large amounts of noise to the analysis product. Extensive experimentation with analysis parameters is required to resolve features of interest that may be present at flight level (but not at the surface) or visible at the surface (but not at flight level; Powell et al. 2009). In addition, observation density is not sufficient to account for smaller-scale temporal variability in the 4–6-h time window. Real-time rapid intensification cases identified by a new reconnaissance aircraft radial flight leg therefore present a challenge for H*Wind, because there are typically not enough data to allow a “rapid refresh” to depict the rapidly changing conditions. Incorporation of airborne Doppler radar measurements may help improve the spatial and temporal observation density, which would allow for increased analysis resolution and shorter analysis time windows, thereby reducing uncertainty in and around these small-scale features.
H*Wind therefore provides an estimate of the intensity and size of tropical cyclones, which can help real-time forecasters and emergency managers make preparations ahead of landfalling hurricanes. The reduced uncertainty of wind speeds near the eyewall is particularly valuable for assessing peak wind speeds at landfall. For research and modeling applications, this evaluation provides useful information on the uncertainty characteristics of the H*Wind analysis.
We thank Drs. Eric Uhlhorn and Bradley Klotz from NOAA/AOML’s Hurricane Research Division for taking time to talk with us and share their knowledge of common observational biases, as well as Shirley Murillo, Sonia Otero, and Bachir Annane for assistance with H*Wind. We would also like to acknowledge funding support through the NASA OVWST.
The Neural Network Algorithm
The neural network algorithm used for scaling the wind speeds is a part of the MATLAB Neural Network Toolbox. This algorithm consists of several simple operations, called neurons, acting in parallel to create a series of radial wind profiles. The network consists of two layers of neurons acting in series. The first layer is designed to weight each input and is built iteratively, with one neuron being added in each iteration until the sum-squared error between the derived profiles and the corresponding H*Wind profiles falls below 0.5 m s−1. Achieving lower sum-squared error values requires a large number of neurons, which in turn requires a larger amount of data to configure the network. The output of the first layer is then passed to the second layer, which generates the wind profiles.
Each set of parameters is input into the network as an N × 7 matrix, where N is the number of input parameter sets (i.e., the number of storm profiles). Each neuron in the first layer of the neural network contains a 1 × 7 weight vector and a bias constant. The weight vector is set equal to the seven parameters of one of the input storms. The neuron calculates the magnitude of the vector difference between each row of the input matrix and the weight vector; each input storm is therefore assigned a value that represents its dissimilarity from the weight vector. These results are then multiplied by the bias constant, which is chosen so that the network responds adequately to changes in storm parameters. A bias constant that is too high results in the network being unable to handle parameters that are not close to those of a neuron’s weight vector; however, a low bias constant results in the network treating all input parameters similarly. A bias constant of 8.326 × 10−3 is empirically chosen to compromise between these two extremes. The resulting values are then passed through the transfer function

f(x) = exp(−x²),

where x is the magnitude of the vector difference between a set of input parameters and the weight vector, multiplied by the bias constant. This transfer function gives each storm a weight between zero and one, with the highest weights going to the storms whose parameters most closely match the weight vector. With the chosen bias constant, a vector difference magnitude of 100 yields a transfer function output of 0.5.
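The quoted numbers can be checked directly: a radial-basis transfer function of the form exp(−x²) (the form used by MATLAB's radbas) together with the stated bias constant maps a vector difference magnitude of 100 to an output of 0.5. A minimal sketch:

```python
import math

# Radial-basis transfer function consistent with the text: f(x) = exp(-x**2),
# where x is the vector-difference magnitude scaled by the bias constant.
def transfer(distance, bias=8.326e-3):
    x = bias * distance
    return math.exp(-x * x)

# A vector difference magnitude of 100 yields an output of 0.5,
# matching the stated choice of bias constant.
half = transfer(100.0)   # approximately 0.5
```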
After passing through the transfer function, the results in each neuron enter the second layer of the neural network in the form of a matrix. This layer also takes in a matrix containing profiles of each H*Wind analysis. These profiles consist of 50 evenly spaced points between Rmax and 1.5 times Rts, as well as two points inside Rmax. The second layer performs a least squares linear regression between the output of the first layer and the target H*Wind profiles. Once enough neurons are created for the sum-squared error to fall below 0.5 m s−1, the completed neural network is used to derive radial profiles for the remaining storm analyses.
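The two-layer procedure can be sketched as a growing radial-basis network (analogous to MATLAB's newrb). The storm parameters, target profiles, and profile shape below are illustrative assumptions; only the structure — a distance-times-bias first layer with an exp(−x²) transfer, a least squares second layer, and one neuron added per iteration until the error goal is met — follows the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n_storms = 30
params = rng.uniform(0.0, 300.0, size=(n_storms, 7))   # hypothetical parameter sets
radii = np.linspace(0.0, 3.0, 52)                      # 52 profile points, as in the text
profiles = 30.0 * np.exp(-radii)[None, :] + rng.normal(0.0, 1.0, (n_storms, 52))
bias = 8.326e-3                                        # bias constant from the text
goal = 0.5                                             # sum-squared error goal

def first_layer(inputs, centers, bias):
    # Each neuron: distance of each input from its weight vector, scaled
    # by the bias constant, through the radial-basis transfer exp(-x**2).
    d = np.linalg.norm(inputs[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(bias * d) ** 2)

# Grow the first layer one neuron at a time until the second layer's
# least squares fit meets the sum-squared error goal.
centers = np.empty((0, 7))
for i in range(n_storms):
    centers = np.vstack([centers, params[i]])
    a = np.hstack([first_layer(params, centers, bias), np.ones((n_storms, 1))])
    coeff, *_ = np.linalg.lstsq(a, profiles, rcond=None)
    sse = np.sum((a @ coeff - profiles) ** 2)
    if sse < goal:
        break

derived = a @ coeff      # derived radial profiles for the training storms
```

Because each neuron's weight vector is one storm's parameters, the network effectively interpolates among the training profiles, weighting each by its similarity to the input parameters.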