A Study of the HWRF Analysis and Forecast Impact of Realistically Simulated CYGNSS Observations Assimilated as Scalar Wind Speeds and as VAM Wind Vectors

Bachir Annane, NOAA/Atlantic Oceanographic and Meteorological Laboratory, Miami, Florida, and Cooperative Institute for Marine and Atmospheric Studies, University of Miami, Miami, Florida

Brian McNoldy, Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, Florida

S. Mark Leidner, Atmospheric and Environmental Research, Lexington, Massachusetts

Ross Hoffman, NOAA/Atlantic Oceanographic and Meteorological Laboratory, Miami, Florida, and Cooperative Institute for Marine and Atmospheric Studies, University of Miami, Miami, Florida

Robert Atlas, NOAA/Atlantic Oceanographic and Meteorological Laboratory, Miami, Florida

Sharanya J. Majumdar, Rosenstiel School of Marine and Atmospheric Science, University of Miami, Miami, Florida

Abstract

In preparation for the launch of the NASA Cyclone Global Navigation Satellite System (CYGNSS), a variety of observing system simulation experiments (OSSEs) were conducted to develop, tune, and assess methods of assimilating these novel observations of ocean surface winds. From a highly detailed and realistic hurricane nature run (NR), CYGNSS winds were simulated with error characteristics that are expected to occur in reality. The OSSE system makes use of NOAA’s HWRF Model and GSI data assimilation system in a configuration that was operational in 2012. CYGNSS winds were assimilated as scalar wind speeds and as wind vectors determined by a variational analysis method (VAM). Both forms of wind information had positive impacts on the short-term HWRF forecasts, as shown by key storm and domain metrics. Data assimilation cycle intervals of 1, 3, and 6 h were tested, and the 3-h impacts were consistently best. One-day forecasts from CYGNSS VAM vector winds were the most dynamically consistent with the NR. The OSSEs have a number of limitations; the most noteworthy is that this is a case study, and static background error covariances were used.

© 2018 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Bachir Annane, bachir.annane@noaa.gov


1. Introduction

Ocean surface wind observations from satellites have been shown to improve the accuracy of numerical weather analyses and forecasts (Atlas et al. 2001; Atlas 1997; Candy et al. 2009; Leidner et al. 2003; Schulz et al. 2007). Accurate surface wind analyses and forecasts are key to estimating the potential damage from storm surge (the deadliest tropical storm hazard; Rappaport et al. 2009; Powell and Reinhold 2007) and wind. However, most current satellite observing systems are unable to provide accurate ocean surface wind speed data in areas of precipitation and generally have limited temporal resolution (e.g., 1–2 overpasses per day). Of all these systems, only L-band sensors, such as those on the Soil Moisture Active Passive (SMAP; Entekhabi et al. 2010) satellite and the NASA Cyclone Global Navigation Satellite System (CYGNSS; Ruf et al. 2016a), can observe winds in the presence of heavy rain, such as occurs in the inner core of a tropical cyclone (TC). CYGNSS is expected to alleviate some of the current deficiencies in temporal and spatial sampling of the surface wind field of tropical cyclones. CYGNSS is also expected to provide an improved capability to observe the structure and evolution of TCs. This will also improve the accuracy of the wind products that are inputs to storm surge models: for example, the Coastal and Estuarine Storm Tide (CEST) and the Sea, Lake, and Overland Surges from Hurricanes (SLOSH) models.

This study focuses on the impact of accurate near-surface wind observations over the ocean on numerical weather prediction (NWP) analyses and forecasts directly. It should be noted that such data also have the potential to indirectly improve NWP by improving the model parameterizations of wind stress and sensible and latent heat fluxes. These processes are critical to air–sea interactions parameterized in global and regional weather forecast models and are key to our understanding of the atmosphere–ocean system. Through assimilation of such wind data, the depiction of the boundary layer can also be improved in weather forecast models (Atlas et al. 1999, 2001).

Improvements in tropical cyclone forecasts over the past few decades have mainly been due to advances in numerical models (Atlas et al. 2015b; Gopalakrishnan et al. 2012; Rappaport et al. 2009; Willoughby et al. 2007). However, forecasting the intensity change of tropical cyclones remains a challenging problem. One reason for the slower improvement in intensity forecasts compared to track forecasts is the lack of frequent sampling of the inner core of the storm (Rogers et al. 2013). Presently, only TC-penetrating aircraft collect measurements in the inner core. These in situ measurements are only collected for about 30% of the lifetimes of tropical cyclones in the Atlantic and even less in the eastern North Pacific (Rappaport et al. 2009). Reconnaissance aircraft (Aberson et al. 2006) such as the NOAA P-3 host the most advanced and accurate instrumentation, including stepped frequency microwave radiometers (SFMRs; Uhlhorn et al. 2007) and global positioning system (GPS) dropwindsondes (Hock and Franklin 1999). With limited dwell time and limited resources (aircraft and dropsondes), the inner cores of even the best-monitored TCs are relative data voids (Uhlhorn and Nolan 2012). A single, well-placed dropwindsonde, properly reduced by empirical methods to 10-m equivalent wind speed, can estimate maximum surface wind speed and hence the TC intensity. However, a fleet of dropwindsondes would be required to map out the complete TC surface wind field to depict the full destructive potential of a storm (Powell and Reinhold 2007).

CYGNSS was designed to address these observational deficiencies. The CYGNSS GPS receivers hosted on eight minisats launched on 17 December 2016 measure reflected ocean surface signals of opportunity (SoO) broadcast by the existing GPS satellites. This bistatic configuration, in which the transmitter and receiver are on different platforms (Fig. 1), contrasts with the monostatic configuration of scatterometers in which the transmitter and receiver are collocated. Using a constellation of eight small satellites at an altitude of 510 km in a single, low-inclination (35°) orbit plane, CYGNSS samples the tropics and subtropics at a nominal spatial resolution of 25 km with improved temporal sampling, compared to polar orbiting satellites. For any given area on Earth between 38°N and 38°S latitude, the spatial and temporal sampling of the ocean surface by the CYGNSS constellation is random, since the movements of the GPS and CYGNSS constellations are not coordinated (Ruf et al. 2016b). But the orbits of the CYGNSS constellation generally produce measured reflections over an area the size of a typical tropical cyclone for two 90-min periods each day, separated by about 12 h. An example of simulated, 6-h coverage over the North Atlantic is shown and described below [section 2b(2)].

Fig. 1. Geometry of bistatic radar measurement of GPS-based quasi-specular surface scattering. The GPS direct signal (Transmitter) provides location, timing, and frequency references, while the forward scattered signal received by CYGNSS (Receiver) contains ocean surface information. Image from Clarizia and Zavorotny (2015).

The goal of the study presented here is to assess the potential utility of CYGNSS observations of ocean surface wind for hurricane analysis and forecasting. How might CYGNSS data be expected to improve or change the analysis and forecasts of tropical cyclones when incorporated into a hurricane analysis and forecast system? What methods work best to extract information from the CYGNSS observations? These questions are examined with an observing system simulation experiment (OSSE) approach (Hoffman and Atlas 2016). The experiments conducted extend the experiments of McNoldy et al. (2017, hereafter M17). Both M17 and the present study conducted OSSEs using NOAA’s Atlantic Oceanographic and Meteorological Laboratory (AOML) hurricane OSSE system that assimilates CYGNSS observations simulated in different ways during the lifetime of one simulated hurricane. In an OSSE, the nature run (NR), or truth, provides both a point of comparison for OSSE experiment results and the source for simulating all observations assimilated. In this study, a pair of self-consistent global and regional NRs is used: a global simulation from the European Centre for Medium-Range Weather Forecasts (ECMWF) model (the T511 NR) and an embedded, nested, high-resolution (up to 1-km grid spacing) simulation from the Advanced Research version of the Weather Research and Forecasting (WRF-ARW) Model.

M17 found a positive impact on TC analyses and forecasts of adding CYGNSS observations to a control experiment through a progression of four experiments, which added 1) realistic CYGNSS wind speed observations retrieved at high (12.5 km) resolution, 2) realistic wind speed observations at nominal (25 km) resolution, 3) perfect wind speed observations, and 4) perfect wind vector observations. Both perfect simulated CYGNSS observation datasets (3 and 4) have the same resolution (12.5 km) and spatial coverage. The noisy, high-resolution winds had the smallest impact because the quality control (QC) procedures rejected much of these data. For this reason, experiments in this study use only nominal-resolution CYGNSS winds. The control experiment and experiment 2 of M17 form the baseline for the new experiments described here and are denoted CTRL6 and CYG6 below. The 25-km-resolution simulated CYGNSS wind speeds of M17 are the basis of all experiments reported here.

Furthermore, in the present study, motivated by the very good results of experiment 4 of M17, direct assimilation of the CYGNSS wind speeds is compared to the assimilation of CYGNSS VAM wind vectors created from the wind speeds, as described in detail by Leidner et al. (2018, hereafter L18). The CYGNSS VAM wind vectors are a result of a variational analysis that combines the CYGNSS wind speeds and a background wind field (L18). In the present case, the background wind fields are 6-h forecasts of the surface wind from a Hurricane Weather Research and Forecasting (HWRF) control experiment described below in section 3. While M17 considered perfect wind vectors, here, the effect of observation errors is propagated from the CYGNSS raw observation to CYGNSS winds through the VAM.

In addition, since TCs evolve and propagate quickly, shorter data assimilation (DA) cycle intervals might yield superior results. In the DA system used here, even though observation innovations are evaluated with respect to the background at the time of the observation, these innovations are all combined to influence the model state at the central analysis time. This approximation is most appropriate for short DA cycles. However, every time the model is initialized with observations, there is some adjustment. For TCs, this adjustment can result in substantial increases (spinup) or decreases (spindown) of intensity. Therefore, there are tradeoffs in selecting the optimal DA cycle interval. In this study, DA cycle intervals of 1, 3, and 6 h are tested, whereas M17 used 6-h cycles in all experiments.

The paper is organized as follows. Section 2 describes the OSSE framework. Section 3 presents the experimental design and section 4 the results. Section 5 summarizes the present study with a focus on its findings and its limitations and briefly describes future planned studies.

2. OSSE framework

To conduct realistic OSSEs related to hurricane analyses and forecasts, AOML and the University of Miami developed a new regional OSSE framework (Atlas et al. 2015a,b,c; McNoldy et al. 2017). A schematic of this OSSE framework is illustrated in Fig. 2.

Fig. 2. Basic flowchart of the regional OSSE framework.

a. Nature run

The OSSE framework is based on a high-resolution regional nature run (Nolan et al. 2013) called HNR1 that was created by embedding the WRF-ARW Model, version 3.2.1, within a lower-resolution global nature run. HNR1 has an outer fixed domain of 27-km grid spacing, spanning the tropical Atlantic basin, and three telescoping, storm-following, nested domains of 9-, 3- and 1-km grid spacing. Sixty model layers span the vertical domain from the surface to 50 hPa. The boundary conditions are provided by a global nature run produced by the ECMWF (version c31r1) T511 model with 91 vertical levels, here called the T511 NR (Andersson and Masutani 2010). The T511 NR is a free-running forecast from 1200 UTC 1 May 2005 to 1200 UTC 1 June 2006. The period of HNR1 is from 0000 UTC 29 July to 0000 UTC 11 August 2005. The two nature runs have similar storm tracks, but in the regional nature run, the hurricane is simulated with more realistic intensity, scale, and structure and undergoes rapid intensification during the period centered on 4 August 2005.

b. Simulated observations

Within a typical OSSE framework, all observations should be simulated from a relevant nature run, and observation errors appropriate to each observation type should be added. In the hurricane OSSE system, conventional and routinely assimilated satellite observations are simulated by sampling the observed quantities from the T511 NR, whereas the CYGNSS observations are simulated with the CYGNSS Science Team end-to-end simulator (E2ES; O’Brien 2014) based on the HNR1 winds. Typical errors are added to the simulated conventional and satellite observations, while a wind retrieval error model assigns realistic errors to the simulated CYGNSS wind speeds. In addition, vector winds are determined from simulated CYGNSS wind speeds using a 2D variational analysis method (VAM). The VAM analyzes the simulated CYGNSS wind speeds given an a priori, gridded ocean surface wind field. The resulting wind direction and speed in the VAM analysis are assigned at each CYGNSS-retrieved wind location to produce VAM CYGNSS wind vectors. More detailed descriptions of the methods and data sources used to simulate observations in our study are provided below.

1) Conventional and satellite observations

Conventional and satellite observations corresponding to those assimilated in National Centers for Environmental Prediction (NCEP) operations were simulated from the T511 NR described in section 2a. Realistic observation errors by observation type are based on estimates in NCEP’s Gridpoint Statistical Interpolation (GSI) analysis system and added to each simulated observation. The errors are drawn from a zero-mean Gaussian distribution using the observation error estimates as the standard deviation (Errico et al. 2013). Because of the close correspondence between the global T511 NR and the embedded regional WRF-ARW HNR1, the simulated conventional and satellite observations reflect the same synoptic conditions as the HNR1 used to simulate the CYGNSS observations, just realized by a global model. All conventional observations of temperature, winds, moisture, and surface pressure, atmospheric motion vectors, and satellite data types that were in operational use in 2012 are simulated from the T511 NR. [See Table 1 of Atlas et al. (2015c) for the detailed list of satellite data sources.]
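
A minimal sketch of this perturbation step, assuming illustrative standard deviations and hypothetical observation-type names (not the actual GSI error-table entries), is given below in Python:

    # Sketch: perturb error-free observations sampled from the nature run with
    # zero-mean Gaussian noise whose standard deviation is the per-type
    # observation error. The error values are illustrative placeholders only.
    import numpy as np

    rng = np.random.default_rng(42)

    OBS_ERROR_SD = {             # hypothetical per-type error SDs
        "radiosonde_t": 1.0,     # K
        "radiosonde_uv": 2.0,    # m/s
        "amv_uv": 3.5,           # m/s
    }

    def perturb(perfect_values, obs_type):
        """Add zero-mean Gaussian errors with the type's error SD."""
        sd = OBS_ERROR_SD[obs_type]
        return perfect_values + rng.normal(0.0, sd, size=perfect_values.shape)

    perfect_u = np.array([12.3, 8.7, 15.1])     # error-free AMV u components
    simulated_u = perturb(perfect_u, "amv_uv")  # values actually assimilated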

2) CYGNSS wind speed observations

The NASA CYGNSS Science Team simulated the CYGNSS wind speed observations with the E2ES, which takes orbital ephemeris for the actual GPS and simulated CYGNSS satellites to simulate reflected power from the gridded ocean surface wind fields. The reflected GPS signal power from the central reflecting point, that is, the specular point (SP; in Fig. 1), as well as weaker reflections from a region approximately 100 km around the SP known as the “glistening zone,” are recorded in the measurement space of GPS signal delay and Doppler shift, known as a delay-Doppler map (DDM). DDMs of reflected power (watts) are converted to DDMs of bistatic radar cross section σ0 (m2). The σ0 DDMs are the primary input to the CYGNSS wind speed retrieval algorithm.

For HNR1, the E2ES generated specular points at a cadence of 1 Hz, using the highest-available-resolution HNR1 grid (27, 9, 3, or 1 km). Figure 3 shows simulated CYGNSS winds in the 27-km domain at 1500 UTC 3 August 2005. Since the outer three grids are available every 30 min and the innermost domain every 6 min, the maximum time difference between NR outputs is 15 min. As the HNR1 nested grids are storm following, CYGNSS SPs in or near the inner core are simulated using 1-km-resolution HNR1 winds. Specular points farther from the hurricane are simulated using HNR1 grids at lower resolutions, depending on location. Consequently, the highest resolution is utilized in the region of highest wind speeds. Note how the coverage changes as the observation window is shortened from ±3 to ±1.5 to ±0.5 h, corresponding to the time windows used for 6-, 3-, and 1-hourly DA cycling. This change in sampling has important consequences for the impact of the assimilated data (explored further in section 4).
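
The time windowing can be illustrated with a short sketch (field names and times are illustrative, not taken from the actual preprocessing code):

    # Sketch: group simulated observations into +/-3, +/-1.5, or +/-0.5 h windows
    # around the analysis time for 6-, 3-, and 1-hourly DA cycling, respectively.
    from datetime import datetime, timedelta

    WINDOW_HALF_WIDTH_H = {6: 3.0, 3: 1.5, 1: 0.5}

    def in_window(obs_time, analysis_time, cycle_interval_h):
        """Return True if the observation falls inside the DA time window."""
        half = timedelta(hours=WINDOW_HALF_WIDTH_H[cycle_interval_h])
        return abs(obs_time - analysis_time) <= half

    analysis = datetime(2005, 8, 3, 15)      # 1500 UTC 3 Aug 2005 (cf. Fig. 3)
    obs = datetime(2005, 8, 3, 17)           # a hypothetical specular-point time
    print(in_window(obs, analysis, 3))       # False: outside the +/-1.5-h window
    print(in_window(obs, analysis, 6))       # True: inside the +/-3-h window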

Fig. 3. Example of sampling of the North Atlantic by the simulated CYGNSS constellation, ±3 h around 1500 UTC 3 Aug 2005. The locations of simulated CYGNSS data in the 6-h window are plotted as colored dots. The blue and green dots show the locations of subsets of all observations within ±1.5 and ±0.5 h, respectively, of 1500 UTC. CYGNSS observation locations are overlaid on the HNR1 27-km-resolution (d01) 10-m wind speed field, valid at the same time.

The HNR1 winds sampled by the E2ES at the CYGNSS locations are “perfect.” The E2ES simulates realistic variation in measurement uncertainties with two additive error components due to uncertainty in 1) the calibration of σ0 DDMs and 2) the wind retrieval algorithm, assuming perfectly calibrated σ0 DDMs. The σ0 DDM uncertainty term was determined by the CYGNSS Science Team from a level 1 processing flowdown error budget to have zero mean and a standard deviation of 1.2 m s−1. The wind retrieval error term is more complex, depending on wind speed and range-corrected gain (RCG). In a simulated calibration exercise, the means and variances of generalized (nonnormal) Gaussian distributions were fit to level 2 data in four wind speed ranges and six RCG ranges. Using the wind speed from the HNR1 at the specular point and the RCG calculated from the orbital ephemeris, pseudorandom errors are added to the perfect observations (A. O’Brien 2016, personal communication). The observation error computed for the simulated CYGNSS-retrieved winds is thus a mixture of two Gaussian errors, one normal and one nonnormal, with typical values of 2–4 m s−1, depending on the factors described above.
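
The following sketch illustrates the structure of this two-component error model; the retrieval-error values are placeholders, and simple Gaussian draws stand in for the generalized-normal fits used by the E2ES:

    # Sketch: additive sigma0-calibration error (SD 1.2 m/s) plus a retrieval
    # error whose magnitude depends on wind speed and range-corrected gain (RCG).
    import numpy as np

    rng = np.random.default_rng(0)
    CAL_SD = 1.2  # m/s, sigma0 DDM calibration uncertainty

    def retrieval_error_sd(wind_speed, rcg):
        """Placeholder lookup: larger errors at higher winds and poorer geometry."""
        speed_factor = 1.0 + 0.05 * wind_speed       # grows with wind speed
        rcg_factor = 1.5 if rcg < 10.0 else 1.0      # low RCG -> larger error
        return speed_factor * rcg_factor

    def simulate_retrieved_wind(true_speed, rcg):
        """Perfect NR wind speed plus calibration and retrieval error draws."""
        calibration_err = rng.normal(0.0, CAL_SD)
        retrieval_err = rng.normal(0.0, retrieval_error_sd(true_speed, rcg))
        return max(0.0, true_speed + calibration_err + retrieval_err)

    print(simulate_retrieved_wind(true_speed=35.0, rcg=8.0))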

3) VAM CYGNSS wind vector observations

CYGNSS data do not include wind direction. With alternative GPS receiver hardware or ground processing, direction might be extracted from the reflected signal (e.g., Komjathy et al. 2004). To assess the benefit of adding directional information, a two-dimensional VAM (Hoffman 1982, 1984) is applied following L18 to simulated CYGNSS wind speeds to generate dynamically realistic vector wind field analyses. The VAM has been applied to determine wind direction from among 2–4 wind ambiguities from both NSCAT and QuikSCAT scatterometer missions (Hoffman et al. 2003). The VAM has also been used to generate nearly 30 years of 6-hourly global ocean surface wind analyses, combining all available passive microwave and scatterometer data since 1987 (Atlas et al. 1996, 2011). The VAM uses an a priori, or first guess, gridded surface wind field as a starting point for each analysis. In this study, 6-h forecasts on a 9-km-resolution outer domain from an HWRF regional Control OSSE experiment (CTRL6, described below in section 3) are used as the VAM analysis first-guess fields. This choice of first-guess winds is intended to emulate what might be the best available choice in real-time operational forecasting. A VAM analysis is generated four times a day at 0000, 0600, 1200, and 1800 UTC for the period of the OSSE (a 4-day period described below in section 3). The VAM analysis u- and υ-wind components are interpolated in space and time to the set of simulated CYGNSS wind speed locations assimilated. These derived observations are referred to hereafter as VAM CYGNSS vector winds. The VAM CYGNSS vector wind error is taken to be the simulated CYGNSS observation error determined by the E2ES plus the RMS VAM error compared to observations (i.e., root-mean-square of CYGNSS wind speed minus VAM analysis wind speed) to account for the influence of the VAM analysis on observation error. The VAM analysis cost function balances the fit to observations with a minimum departure from the background, so the RMS VAM error implicitly includes an estimate of background error.
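
A minimal sketch of this error assignment, under the stated sum of the E2ES error and the RMS VAM departure (all values illustrative), is:

    # Sketch: VAM CYGNSS vector wind error = E2ES-simulated CYGNSS wind speed
    # error + RMS(CYGNSS wind speed - collocated VAM analysis wind speed).
    import numpy as np

    def vam_vector_wind_error(e2es_error, cygnss_speed, vam_speed_at_obs):
        """Per-observation error: E2ES error plus the scalar RMS VAM departure."""
        rms_vam = np.sqrt(np.mean((cygnss_speed - vam_speed_at_obs) ** 2))
        return e2es_error + rms_vam

    e2es_error = np.array([2.4, 3.1, 2.8])         # m/s, from the error model above
    cygnss_speed = np.array([18.0, 32.5, 25.0])    # m/s, simulated CYGNSS speeds
    vam_speed = np.array([17.2, 30.8, 25.9])       # m/s, VAM analysis at obs points
    print(vam_vector_wind_error(e2es_error, cygnss_speed, vam_speed))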

c. Data assimilation and forecast model

Since a global modeling system is heavily parameterized and cannot sufficiently resolve the small scales that are major contributors to TC rapid intensification processes, a regional model specifically developed for TCs is used in this study. HWRF is used specifically to be consistent with the goals of the Hurricane Forecast Improvement Project (Gall et al. 2013) and because the research version closely parallels the operational version. This approach allows us to assess the impact of new observing systems through improved HWRF initial conditions (ICs) and is similar to the setup used by Atlas et al. (2015b) to investigate the potential impact of an Optical Autocovariance Wind Lidar (OAWL) on TC prediction. In our experiments, we use the 2012 version of the operational NCEP HWRF DA system. The HWRF Model parameterizations include the Global Forecast System (GFS) planetary boundary layer scheme, the new simplified Arakawa–Schubert cumulus scheme (only for the parent domain, since convection is explicit in the nested domain), the Ferrier microphysics scheme, and the Geophysical Fluid Dynamics Laboratory (GFDL) scheme for shortwave and longwave radiation. This version (v3.5; Tallapragada et al. 2013; Atlas et al. 2015c) is configured in our experiments with a fixed 9-km parent domain and a 3-km nested storm-following domain (cf. Fig. 4). In the HWRF DA system, NCEP’s GSI three-dimensional variational (3DVar) scheme assimilates the observations. QC follows GSI’s practice of gross outlier removal by comparison with background values, and CYGNSS data are treated as ship observations for QC purposes. Data assimilation is performed on the 9-km domain only, with no vortex relocation.

Fig. 4. Configuration of model domains. The 27-km-resolution domain (d01) of HNR1 is shown in blue, and the 9-km (d01) and nested 3-km (d02) OSSE grids are shown in black.

3. Experimental design

Nine experiments varying the use of the CYGNSS observations and the frequency of the DA cycling interval are carried out within the OSSE system to assess the simulated impact of CYGNSS observations on hurricane analysis and forecasting. First, a control DA experiment (CTRL) assimilates standard conventional data that are routinely assimilated in the 2012 GFS Data Assimilation System (GDAS), including radiosondes, atmospheric motion vectors, and numerous satellite-based observations [see section 2b(2)], but no CYGNSS observations. This is followed by an experiment where CYGNSS wind speeds are added to the control (CYG) and an experiment where VAM CYGNSS wind vectors are added to the control (VAM). Each of these OSSEs is conducted at three data assimilation frequencies: 6-, 3-, and 1-hourly. (The numeral 6, 3, or 1 is added to the experiment names to denote cycling frequency; see Table 1.) Note that all simulated observations are binned/grouped by time at these frequencies: that is, ±3-, ±1.5-, and ±0.5-h time windows around the DA analysis times, respectively. For convenience, in the text, we will refer to all the CTRL, CYG, and VAM experiments collectively as EXP; CTRL6, CYG6, and VAM6 experiments collectively as EXP6; and similarly for EXP3 and EXP1.

Table 1. List of experiments; 6, 3, and 1 denote the cycle interval in h.

The nine experiments and the average amount of CYGNSS data assimilated in each DA cycle are listed in rows 3 and 4 of Table 1. Although the total number of observations is the same, these are divided into smaller chunks as the cycling frequency increases. Also, the number of assimilated values doubles in the VAM experiments, since two wind components (u and υ) are assimilated at each location where a single CYGNSS wind speed is assimilated in the CYG experiments. All of the experiments are initialized at 0000 UTC 1 August 2005. GFS global control OSSE analyses described by Casey et al. (2016) are used to provide initial and lateral boundary conditions. Cycling is performed through 0000 UTC 5 August for a total of 16, 32, and 96 analyses for experiments with 6-, 3-, and 1-h cycles, respectively. A 5-day HWRF forecast is initialized every 6 h in all experiments. Each experiment is then verified against the HNR1. Forecast initial times before 0600 UTC 2 August are discarded to eliminate the effects of model adjustment to the cold start from the global analysis. Error statistics reported below from these nine OSSE experiments are compared using the final 12 forecasts in the experiment period (i.e., with initialization times every 6 h from 0600 UTC 2 August to 0000 UTC 5 August).
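
A small sketch of the verification bookkeeping described above (times taken directly from the text) is:

    # Sketch: the 12 verified forecast initialization times, every 6 h from
    # 0600 UTC 2 Aug to 0000 UTC 5 Aug 2005, after discarding earlier cycles
    # affected by the cold start from the global analysis.
    from datetime import datetime, timedelta

    start = datetime(2005, 8, 2, 6)      # first verified initialization
    end = datetime(2005, 8, 5, 0)        # last initialization in the period
    init_times = []
    t = start
    while t <= end:
        init_times.append(t)
        t += timedelta(hours=6)

    assert len(init_times) == 12         # N = 12 forecasts used in the statistics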

4. OSSE results

The results of the experiments described in the previous section are presented here in three parts: 1) statistical summaries of the errors in TC track and intensity, 2) domain-wide errors, and 3) physical interpretations of the analyses and forecasts of the 10-m wind field.

a. Assessment of TC track and intensity errors

To evaluate and compare the effect of simulated CYGNSS wind speed and VAM CYGNSS wind vectors, tropical cyclone metrics are calculated and compared to HNR1 values (truth). Those metrics are storm center position, minimum sea level pressure (MSLP; hPa) and the maximum wind speed (kt; 1 kt = 0.5144 m s−1; Gall et al. 2013). For each 5-day forecast within a given OSSE experiment (forecasts are started every 6 h at synoptic times), error metrics are computed with respect to the HNR1 every 6 h. Error in all cases is defined as experiment minus the truth (EXP − HNR1). Mean and standard deviation of error are computed from 12 forecasts (N = 12) at each forecast lead time to 96 h. (However, N is reduced for 108- and 120-h forecasts because some of the later verification times move the HNR1 hurricane close to the boundaries of our regional OSSE domain. For this reason, we show results from here forward for 0–96-h forecasts.) Note that we calculate mean error, not mean absolute error. Nevertheless, track errors are always positive. Also, since the HWRF OSSE hurricanes are uniformly less intense than the HNR1 hurricane, all errors are positive (for MSLP) or negative (for maximum wind speed). So, in an absolute sense, the results shown are equal to mean absolute error. Note that while 12 is not a large number of forecasts for assessing statistical significance, and these forecasts are all during the lifetime of a single simulated hurricane, the average performance does provide an indication of the variation of error over the forecast hours and between OSSE experiments.
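
The computation of these statistics can be sketched as follows, with illustrative arrays standing in for the actual experiment and nature run metrics:

    # Sketch: mean and standard deviation of (experiment minus HNR1) errors across
    # the N = 12 forecasts at each forecast lead time (0-96 h every 6 h).
    import numpy as np

    def error_stats(exp_values, truth_values):
        """Per-lead-time mean and standard deviation of EXP - HNR1 error."""
        err = exp_values - truth_values          # sign convention: EXP minus truth
        return err.mean(axis=0), err.std(axis=0, ddof=1)

    n_forecasts, n_leads = 12, 17                # 12 forecasts x 17 lead times
    rng = np.random.default_rng(1)
    exp_mslp = 960.0 + rng.normal(0.0, 5.0, size=(n_forecasts, n_leads))
    nr_mslp = 945.0 + rng.normal(0.0, 3.0, size=(n_forecasts, n_leads))
    mean_err, sd_err = error_stats(exp_mslp, nr_mslp)   # as plotted in Fig. 5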

Figures 5a–c show the hurricane track error for each cycling frequency (6, 3, and 1 hourly). In each panel, the mean and standard deviation of the track errors with respect to the HNR1 are plotted as a function of forecast hour for the CTRL, CYG, and VAM experiments. Overall, the track errors among the experiments for any given cycling frequency are quite similar; that is, forecast error growth dominates the CYGNSS impact. All OSSE experiments and cycling frequencies produce similar position errors for 1–3-day forecasts (0–72 h), but EXP6 errors are smaller than EXP3 and EXP1 errors for 3–4-day forecasts. An inverse relationship between cycling frequency and observation data coverage means that the 3–4-day track errors are increased for EXP3 and EXP1, compared to EXP6. The large-scale environment is better characterized by the increased data coverage of 6-hourly cycling. Initial position error (forecast hour 0) is smallest for EXP3 (Fig. 5b; ~50 km). Judging from the overlap of the one-standard-deviation bounds, the differences in analysis position error are likely not statistically significant. The differences in forecast track error statistics by cycling frequency are large enough to explore the statistical significance between different DA cycling intervals (see discussion of Fig. 6 below). For these OSSE experiments based on the HNR1 case, CYGNSS data do not seem to improve or degrade the forecast track, but the differences in track error are sensitive to cycling frequency.

Fig. 5. Average storm forecast errors with light ± standard deviation lines plotted for (a),(d),(g) 6-; (b),(e),(h) 3-; and (c),(f),(i) 1-hourly DA cycling experiments. Mean errors/deviations are colored by OSSE experiment: black/gray for CTRL, red/light red for CYG, and blue/light blue for VAM.

Fig. 6. (a) MSLP forecast error and (b) maximum wind speed forecast error of experiments CTRL3 (heavy dashed black) and CTRL1 (solid black) with respect to CTRL6 forecast errors. The 95th-percentile CIs are plotted: two-sided CIs are plotted in transparent gray, and one-sided CIs are plotted with a thin dash–dotted line for CTRL3 and a dotted line for CTRL1. (c),(d) As in (a),(b), but for CYG3 and CYG1 errors with respect to CYG6 forecast errors. (e),(f) As in (a),(b), but for VAM3 and VAM1 errors with respect to VAM6 forecast errors.

Figures 5d–f are similar in presentation to the first row, but for MSLP. As with track error, there are significant differences between experiments using different cycling frequencies. But unlike track error, EXP3 produces the lowest overall MSLP errors. For example, mean MSLP analysis errors (forecast hour 0) are 19–22 hPa for EXP6, 11–13 hPa for EXP3, and 13–17 hPa for EXP1. The standard deviation of MSLP forecast errors tends to decrease for all experiments, indicative of the forecast model consistently spinning up initially weak storms. Notice that unlike track error, MSLP error is sensitive to both cycling frequency and the assimilation of CYGNSS data. For all cycling frequencies, the VAM OSSE experiments have the lowest MSLP errors, compared to CTRL and CYG experiments, over forecast hours 0–48. The positive impact of CYGNSS data evident in these average MSLP error statistics is large enough to explore statistical significance further (see discussion of Fig. 7 below).

Fig. 7. (a) MSLP forecast error and (b) maximum wind speed forecast error of experiments CYG3 (red) and VAM3 (blue) with respect to CTRL3. The 95th-percentile CIs are plotted: two-sided CIs are plotted in transparent colors, and one-sided CIs are plotted with thin dotted lines.

Figures 5g–i show the error in maximum wind speed for all OSSE experiments. The monotonic reduction in maximum wind speed error (i.e., less negative) for all experiments during forecast hours 0–48 is another reflection of the forecast model consistently spinning up initially weak storms. The maximum wind speed is closely tied to MSLP through the wind–pressure relationship (Knaff and Zehr 2007). Both metrics reflect hurricane intensity. Like the MSLP errors already discussed, maximum wind speed errors are smallest for EXP3, particularly during the start (out to 48 h) of the forecasts. This indicates that differences in these hurricane error statistics in our OSSE study are primarily due to cycling frequency. However, as with MSLP errors, assimilation of CYGNSS observations reduces maximum wind speed analysis and forecast errors through forecast hours 0–48 for all cycling frequencies by 0–8 kt.

The results in Fig. 5 point to potentially important impacts on hurricane intensity (i.e., reduced MSLP and maximum wind speed errors) from both the assimilation of simulated CYGNSS observations and the choice of cycling frequency. To explore this further, the statistical significance of differences in forecast error between OSSE experiments is investigated. First, the influence of cycling frequency is shown in Fig. 6 using the three CTRL OSSE experiments. The CTRL3 and CTRL1 experiments are investigated, using CTRL6 errors as a common baseline. Using the CTRL experiments removes the influence of simulated CYGNSS observations from the evaluation of cycling frequency. MSLP error differences (Fig. 6a) show that assimilation every 3 h (CTRL3) improves forecast MSLP by 0–10 hPa during the first 24 h, compared to 6-hourly cycling (CTRL6). To assess significance, the 95th-percentile confidence interval (CI) from a two-sided paired t test is plotted with gray semitransparent shading. The one-sided 95% confidence intervals are also plotted as light dotted or dash–dotted lines. Where the one-sided CI lines are greater than zero, the mean experiment MSLP error is less than the CTRL6 error with greater than 95% confidence. Figure 6a shows that the CTRL1 experiment improvements are marginally significant at the 95% confidence level for forecast hours 0–60. The improvement from assimilating every 3 h (CTRL3) is larger than that in experiment CTRL1 during forecast hours 0–24, and the difference between CTRL6 and CTRL3 is significant at the 95% level over those hours. The improvement from assimilation at 1- or 3-h intervals reduces to near zero after 48 h for the remainder of the forecast period.
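
A sketch of this paired significance test, using scipy and illustrative error values at a single forecast hour, is given below:

    # Sketch: paired differences (baseline error minus experiment error) across the
    # 12 forecasts yield a two-sided 95% CI and a one-sided 95% lower bound; a
    # lower bound above zero indicates improvement at greater than 95% confidence.
    import numpy as np
    from scipy import stats

    def paired_ci(baseline_err, exp_err, conf=0.95):
        """Two-sided CI and one-sided lower bound for the mean paired difference."""
        d = np.asarray(baseline_err) - np.asarray(exp_err)
        n = d.size
        mean, sem = d.mean(), d.std(ddof=1) / np.sqrt(n)
        half = stats.t.ppf(1.0 - (1.0 - conf) / 2.0, df=n - 1) * sem
        lower = mean - stats.t.ppf(conf, df=n - 1) * sem
        return (mean - half, mean + half), lower

    # Illustrative MSLP errors (hPa) at one forecast hour for 12 paired forecasts
    ctrl6 = np.array([20, 18, 22, 19, 21, 17, 23, 20, 19, 22, 18, 21], float)
    ctrl3 = np.array([13, 12, 15, 11, 14, 12, 16, 13, 12, 15, 11, 14], float)
    two_sided, lower = paired_ci(ctrl6, ctrl3)
    print(two_sided, lower)   # lower > 0: CTRL3 error significantly below CTRL6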

The statistical significance of impacts on forecast maximum wind speed for different cycling frequencies is shown in Fig. 6b. The figure can be interpreted similarly to Fig. 6a and shows results similar to MSLP. Therefore, forecasts of both MSLP and maximum wind speed are most accurate with 3-hourly cycling, and the improvements are statistically significant for at least the first 24 h.

Next, the influence of CYGNSS data on the 3-hourly cycling experiments is shown in Fig. 7. The CYG3 and VAM3 experiments are investigated, using CTRL3 errors as a common baseline. Figure 7 shows the difference between CYG3/VAM3 experiment errors and CTRL3 errors (i.e., CTRL3 − EXP3) for MSLP and maximum wind speed. MSLP error differences (Fig. 7a) indicate that assimilation of CYGNSS data (both in scalar and vector form; CYG3 vs VAM3, respectively) improves the forecast MSLP by 2–5 hPa during the first 48 h. To assess significance, the 95th-percentile CI from a two-sided paired t test is plotted with light blue and light orange semitransparent shading. Since the OSSE forecast hurricanes are uniformly less intense than HNR1, a more appropriate hypothesis is that the CYGNSS observations increase the intensity of the analyzed and forecast hurricane. Therefore, the one-sided 95% confidence intervals are also plotted as dotted lines. Where the dotted, one-sided CI lines are greater than zero, the mean experiment MSLP error is less than the CTRL3 error with 95% confidence. Figure 7a shows that the VAM3 experiment improvements are marginally significant at the 95% confidence level for forecast hours 0–48. The improvement from assimilating VAM3 vectors is somewhat larger than that in experiment CYG3 during forecast hours 0–36, and the difference is significant at the 95% level between forecast hours 24 and 36. The improvement from assimilation of VAM CYGNSS vectors after 48 h reduces to near zero for the remainder of the 5-day forecast period, whereas the improvement from the assimilation of CYGNSS wind speed continues in the forecasts until 96 h. But the reduction in error in the CYG3 forecasts between hours 48 and 96 is only statistically significant with 95% confidence at forecast hour 72.

The statistical significance of impacts from assimilating CYGNSS observations on forecast maximum wind speed are shown in Fig. 7b. The figure can be interpreted similarly to Fig. 7a. Because intensity in terms of maximum wind speed has the opposite sense of intensity in terms of MSLP (see above), improvements in CYG3 and VAM3 forecasts with respect to CTRL3 appear as mean error differences less than zero. Therefore, where the one-sided CI lines (dotted lines) are less than zero, the mean experiment maximum wind speed error is less than CTRL3 error with 95% confidence. The average reduction in maximum wind speed error from assimilation of CYGNSS observations is 2–6 kt over forecast hours 0–54. The VAM3 error differences from CTRL3 are significant at the 95% level for forecast hours 0–54.

b. Domain-wide errors

Figure 8 shows the domain-wide error statistics for 10-m wind speeds with respect to the HNR1 10-m winds. Given that the 9-km domain dimensions are 411 × 705 and that there are 12 forecasts, the RMS error (square root of the mean squared vector wind difference) at each 6-h forecast interval is the result of approximately 3.5 million vector wind differences (EXP − HNR1). Notice that the RMS errors are generally quite small, increasing from 1–2 kt in the analyses (forecast hour 0) to 3–4 kt for 5-day forecasts. The standard deviation of those errors also increases from 0.25 to 1 kt. As with the error statistics presented in Fig. 5, the EXP3 experiments have the lowest errors at analysis times. The effect of CYGNSS data can be seen over the first 0–24 forecast hours on a domain-wide basis. The RMS error in the analyzed fields is reduced by small but consistent amounts (0.1–0.25 kt as forecast time increases) by the assimilation of CYGNSS data (i.e., compared to CTRL), with the largest reductions occurring in the 6- and 1-hourly cycling experiments, at least in part because CTRL6 and CTRL1 errors are larger than the CTRL3 error. Improvements similar to those in the 10-m, domain-averaged winds owing to CYGNSS data can be seen in fields aloft (e.g., 850-hPa temperature and 500-hPa heights; not shown).
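
A minimal sketch of this domain-wide statistic, with random fields standing in for the model and nature run winds, is:

    # Sketch: RMS vector wind difference over the 411 x 705 grid of the 9-km
    # domain, pooled over the 12 forecasts at a given lead time (about 3.5
    # million differences per plotted point).
    import numpy as np

    def rms_vector_wind_error(u_exp, v_exp, u_nr, v_nr):
        """Square root of the mean squared vector wind difference."""
        sq_diff = (u_exp - u_nr) ** 2 + (v_exp - v_nr) ** 2
        return np.sqrt(sq_diff.mean())

    rng = np.random.default_rng(2)
    shape = (12, 411, 705)                       # forecasts x grid rows x columns
    u_exp, v_exp = rng.normal(5, 3, shape), rng.normal(0, 3, shape)
    u_nr = u_exp + rng.normal(0, 1, shape)       # illustrative "truth" fields
    v_nr = v_exp + rng.normal(0, 1, shape)
    print(rms_vector_wind_error(u_exp, v_exp, u_nr, v_nr))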

Fig. 8. Large-scale, domain-averaged, 10-m wind errors (RMS; m s−1) for (a) 6-hourly DA cycling, (b) 3-hourly DA cycling, and (c) hourly cycling. Experiments are plotted by color as in Fig. 5.

Figure 9 shows the absolute integrated kinetic energy (IKE) differences (errors) between the OSSE experiments and HNR1, arranged by cycling frequency as in Fig. 8. In Fig. 9, IKE is the domain integral of the squared 10-m wind vector, scaled into energy units (Powell and Reinhold 2007). Thus, IKE accumulates the energy of a 2D wind field at a given time into a single, scalar estimate of total energy. The IKE differences by experiment and by DA cycling frequency mirror the results presented in Figs. 5 and 7 for MSLP error, maximum wind error, and domain-wide 10-m wind error. That is, 3-hourly DA cycling produces the lowest IKE error for the CTRL, CYG, and VAM experiments, and the assimilation of CYGNSS data, whether wind speed or VAM CYGNSS vector data, reduces the IKE error at all cycling frequencies, in agreement with results presented in McNoldy et al. (2016). As seen in the domain-wide errors in Fig. 8, the error reduction from the assimilation of CYGNSS data is largest in the 6- and 1-hourly cycling experiments, at least in part because CTRL6 and CTRL1 errors are larger than the CTRL3 error.
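
A sketch of the IKE calculation, assuming the simple form in Powell and Reinhold (2007) with unit air density over a 1-m-deep layer and an illustrative wind field, is:

    # Sketch: IKE = sum over grid cells of 0.5 * rho * (10-m wind speed)^2 *
    # cell area * 1-m depth, expressed in terajoules (TJ).
    import numpy as np

    def integrated_kinetic_energy(u10, v10, cell_area_m2, rho=1.0):
        """Domain-integrated kinetic energy (TJ) of the 10-m wind field."""
        speed_sq = u10 ** 2 + v10 ** 2
        ike_joules = np.sum(0.5 * rho * speed_sq * cell_area_m2 * 1.0)  # 1-m layer
        return ike_joules / 1.0e12

    rng = np.random.default_rng(3)
    u10 = rng.normal(5.0, 8.0, size=(411, 705))      # m/s, illustrative 9-km domain
    v10 = rng.normal(0.0, 8.0, size=(411, 705))
    ike_exp = integrated_kinetic_energy(u10, v10, cell_area_m2=9000.0 ** 2)
    # The absolute IKE error plotted in Fig. 9 would be abs(ike_exp - ike_hnr1).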

Fig. 9. Absolute IKE error (TJ) as a function of forecast hour for (a) 6-hourly DA cycling, (b) 3-hourly DA cycling, and (c) hourly cycling. Error is the difference between OSSE experiment IKE and NR IKE (HNR1).

c. 10-m hurricane wind field

The distribution of surface wind vectors around a hurricane is its dynamic footprint on the ocean surface. It reflects the structure of the low-level wind field and controls interaction with the ocean surface, including storm surge, surface fluxes, the wave field, and ocean mixed layer depth. Next, visualizations of 10-m wind fields from the HNR1 and OSSE experiments illustrate the impact of CYGNSS data.

Given the significance of the improvement in 0–48-h intensity forecasts shown in the previous section, Figs. 10 and 11 illustrate the physical impacts of assimilation of CYGNSS data on 24-h forecasts of the 10-m wind field. For the period and geographic region of our study, the entire hurricane circulation is sampled by CYGNSS during the two 3-hourly DA cycles each day at 1500 and 1800 UTC. Therefore, 5-day forecasts starting at 1800 UTC on any day during the OSSE experiments have the benefit of one or two recent 3-hourly DA cycles with assimilation of CYGNSS data in or near the inner core of the tropical cyclone. So, 24-h forecasts starting at 1800 UTC should show the clearest benefit from assimilation of these data.

Fig. 10. (a) NR 10-m wind speed valid at 1800 UTC 4 Aug and (b)–(d) 24-h forecasts of 10-m wind speed from OSSE experiments CTRL3, CYG3, and VAM3, valid at the same time as (a). The instantaneous wind maximum Vmax is labeled in the lower left in each panel.

Fig. 11. As in Fig. 10, but for (a) NR valid time of 1800 UTC 5 Aug and (b)–(d) 24-h OSSE experiment forecasts valid at 1800 UTC 5 Aug.

Figure 10 shows 10-m wind speed fields from the HNR1 (9-km domain) and three 24-h forecasts from the 9-km domain of OSSE experiments CTRL3, CYG3, and VAM3, all valid at 1800 UTC 4 August. The fields are instantaneous values and are therefore subject to fluctuation from time step to time step. For example, the maximum wind speed can change location and intensity from model time step to time step. Nevertheless, the pattern of the 10-m wind speed field gives a good overall indication of intensity and shows storm asymmetries. The HNR1 wind speed maximum of 52.8 m s−1 is more closely approximated by CYG3 and VAM3 24-h forecasts (maximum wind speeds of 49.8 and 51.6 m s−1, respectively) than by CTRL3 (47.0 m s−1). Also, the closed annulus of winds greater than 40 m s−1 in HNR1 is most closely approximated by the 24-h CYG3 forecast. Neither the CTRL3 nor VAM3 24-h forecast wind fields have wind speeds greater than 40 m s−1 in all quadrants, as in the HNR1 and CYG3 forecast. Thus, the 24-h forecast wind fields in both experiments that assimilate CYGNSS data (CYG3 and VAM3) are improved, but in different aspects, compared to the CTRL3 forecast.

Figure 11 shows a comparison of OSSE 24-h forecast wind field to the HNR1 wind field, but valid 1800 UTC 5 August, a day later than in Fig. 10. In this comparison, the VAM3 forecast is closest in intensity and structure to the HNR1 wind field. Note that the CYG3 forecast wind field is not as intense or as well structured as the CTRL3 forecast wind field. At other analysis times, assimilation of CYGNSS data when the hurricane is only partially covered can produce asymmetries in the resulting GSI 3DVar analyses in both CYG3 and VAM3 experiments (not shown). The issue of partial sampling of tropical systems by space-based instruments that measure ocean surface winds (passive microwave and scatterometers) has long been a challenge for DA systems. However, in our OSSE experiments, the asymmetries introduced by the assimilation of simulated CYGNSS wind speed (CYG3) are often stronger and more disturbing to the structure of the surface wind field than the assimilation of VAM CYGNSS winds (VAM3). The more complete set of information presented to the DA system as VAM CYGNSS winds is likely the reason that these winds produce consistently better analyses and 0–48-h forecasts, compared to CYG experiments. This is one explanation for the differences between CYG3 and VAM3 0–48-h forecast errors presented in Fig. 7 (section 4a).

5. Summary and conclusions

The potential value of observations to be collected by the NASA Cyclone Global Navigation Satellite System (CYGNSS) for hurricane analysis and forecasting is explored in a simulation study. Since vector winds have more information content than scalar winds, two approaches to assimilating the CYGNSS observations were tested: CYGNSS winds were assimilated as scalar wind speeds and as wind vectors determined by the VAM (as described by L18). Because TCs can evolve rapidly, results from three different DA cycle intervals (1, 3, and 6 hourly) were compared to assess CYGNSS impact.

The OSSE experiment results on the 9-km domain are evaluated with respect to the HNR1 9-km domain. A combination of statistical evaluations of analysis and forecast errors and phenomenological evaluations of the OSSE hurricane 10-m wind fields demonstrates a number of consistent findings. Overall, the results show that the impacts of assimilating simulated CYGNSS data on the analyses and forecasts are positive and that the OSSE system performance is sensitive to cycling frequency. Analysis and forecast errors for all experiments (CTRL, CYG, and VAM) are lowest for 3-hourly DA cycling and lower than the 6- and 1-hourly errors with statistical significance greater than 95% for 0–36-h forecast lead times. This result demonstrates that the interaction between hurricane forecasts in the HWRF Model and 3-hourly application of GSI 3DVar is the most beneficial to the maintenance of a balanced cyclone during DA cycling. Therefore, the following summary of results focuses on the 3-hourly DA cycling experiments, though similar results hold for all cycling frequencies.

For the 3-hourly DA cycling OSSE experiments, CYGNSS data improve the forecast intensity of the simulated hurricane over the first 48 h by 2–5 hPa for minimum sea level pressure and by 2–6 kt for maximum wind speed, compared to experiment CTRL3. These improvements are statistically significant at the 95% confidence level for the VAM3 experiment and at a 90% confidence level for the CYG3 experiment. There is no statistically significant reduction or increase in track error for OSSE experiments CYG3 or VAM3, compared to CTRL3. For forecast hours 48–96, the intensity improvement in the VAM3 experiment is reduced to near zero, and the intensity improvement in CYG3 experiment is still positive but with lower statistical confidence (i.e., <95%). This improvement in forecasts due to CYGNSS observations is also quantified as a reduction of integrated kinetic energy (IKE) error in all experiments that assimilate simulated CYGNSS data, compared to CTRL experiments. From examples of 24-h HWRF forecasts of the 10-m surface winds along with the validating HNR1 wind fields, the structure of the inner-core 10-m wind field in CYG and VAM experiment forecasts is improved, compared to CTRL experiments.

These results suggest that for forecast hours 0–36, assimilation of VAM CYGNSS vectors improves the intensity and structure of the 10-m wind field in HWRF forecasts more than assimilation of CYGNSS wind speed alone. When GSI 3DVar is applied to cases with partial coverage of the hurricane circulation by simulated CYGNSS wind observations, assimilation of CYGNSS wind speed routinely produces larger asymmetries in the analyzed hurricane wind field than assimilation of VAM CYGNSS vectors. The evidence for this can be seen in the reduction of mean MSLP and maximum wind speed errors for the VAM experiments, compared to the CYG experiments, for forecast hours 0–36. Further examples in L18 show that the VAM wind vectors are dynamically consistent with the background. Greater impact from CYGNSS is anticipated when plans are realized to integrate the VAM into the HWRF DA system as a preprocessor for CYGNSS observations (L18). It should be noted that using HWRF short-term forecasts as backgrounds for VAM analyses will bias VAM CYGNSS vectors toward HWRF model solutions, including model errors. However, for the small spatial scales in the wind field near the centers of TCs, it is arguable that no better choice of backgrounds for VAM wind vector analyses exists for use in near-real-time operations.

The most important limitations of the present study are that static background error covariances (BECs) are used and that this is a single case study. Since TCs are highly structured phenomena, the true BECs are complex and poorly approximated by the static BECs used in this study. Ensemble and hybrid DA methods should be used in future OSSEs and observing system experiments (OSEs) to overcome this limitation. A comparison of multiscale GSI-based EnKF and 3DVar assimilation shows that the EnKF produces improved analyses and forecasts, primarily due to local, flow-dependent background error covariances and cross-variable correlations (Johnson et al. 2015). One storm is clearly too small a sample from which to draw general conclusions, and a much larger sample of simulated TCs in different ocean basins is required to generate more robust error statistics. This study should also be extended to multiple TCs using real data. The 2017 hurricane season provides the first opportunity to systematically observe tropical cyclones with the CYGNSS constellation. During this period, the authors plan to investigate the impact of real CYGNSS data, assimilating both scalar wind speeds and VAM CYGNSS vectors, in OSEs that parallel HWRF operations.

Acknowledgments

This study was supported by NOAA—directly and through the Cooperative Agreement NA15OAR4320064 for the Cooperative Institute for Marine and Atmospheric Studies (CIMAS)—and by NASA through Award NNL13AQ00C. We thank Christopher Ruf at the University of Michigan and the CYGNSS Science Team for the simulated CYGNSS datasets, the NOAA Office of Weather and Air Quality for funding the initial development of the regional OSSE framework, the NOAA Hurricane Forecast Improvement Project for computing resources, the Developmental Testbed Center for the GSI and HWRF code and support, Sean Casey at CIMAS/AOML for providing the GFS Control data, and David Nolan at the University of Miami for providing the WRF nature run dataset.

REFERENCES

  • Aberson, S. D., M. L. Black, R. A. Black, J. J. Cione, C. W. Landsea, F. D. Marks, and R. W. Burpee, 2006: Thirty years of tropical cyclone research with the NOAA P-3 aircraft. Bull. Amer. Meteor. Soc., 87, 1039–1056, https://doi.org/10.1175/BAMS-87-8-1039.

  • Andersson, E., and M. Masutani, 2010: Collaboration on observing system simulation experiments (joint OSSE). ECMWF Newsletter, No. 123, ECMWF, Reading, United Kingdom, 14–16, https://doi.org/10.21957/62gayq76.

  • Atlas, R., 1997: Atmospheric observations and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan, 75, 111–130, https://doi.org/10.2151/jmsj1965.75.1B_111.

  • Atlas, R., R. N. Hoffman, S. C. Bloom, J. C. Jusem, and J. Ardizzone, 1996: A multiyear global surface wind velocity dataset using SSM/I wind observations. Bull. Amer. Meteor. Soc., 77, 869–882, https://doi.org/10.1175/1520-0477(1996)077<0869:AMGSWV>2.0.CO;2.

  • Atlas, R., S. C. Bloom, R. N. Hoffman, E. Brin, J. Ardizzone, J. Terry, D. Bungato, and J. C. Jusem, 1999: Geophysical validation of NSCAT winds using atmospheric data and analyses. J. Geophys. Res., 104, 11 405–11 424, https://doi.org/10.1029/98JC02374.

  • Atlas, R., and Coauthors, 2001: The effects of marine winds from scatterometer data on weather analysis and forecasting. Bull. Amer. Meteor. Soc., 82, 1965–1990, https://doi.org/10.1175/1520-0477(2001)082<1965:TEOMWF>2.3.CO;2.

  • Atlas, R., R. N. Hoffman, J. Ardizzone, S. M. Leidner, J. C. Jusem, D. K. Smith, and D. Gombos, 2011: A cross-calibrated, multiplatform ocean surface wind velocity product for meteorological and oceanographic applications. Bull. Amer. Meteor. Soc., 92, 157–174, https://doi.org/10.1175/2010BAMS2946.1.
  • Atlas, R., L. Bucci, B. Annane, R. Hoffman, and S. Murillo, 2015a: Observing system simulation experiments to assess the potential impact of new observing systems on hurricane forecasting. Mar. Technol. Soc. J., 49, 140–148, https://doi.org/10.4031/MTSJ.49.6.3.

  • Atlas, R., V. Tallapragada, and S. Gopalakrishnan, 2015b: Advances in tropical cyclone intensity forecasts. Mar. Technol. Soc. J., 49, 149–160, https://doi.org/10.4031/MTSJ.49.6.2.

  • Atlas, R., and Coauthors, 2015c: Observing system simulation experiments (OSSEs) to evaluate the potential impact of an optical autocovariance wind lidar (OAWL) on numerical weather prediction. J. Atmos. Oceanic Technol., 32, 1593–1613, https://doi.org/10.1175/JTECH-D-15-0038.1.

  • Candy, B., S. J. English, and S. J. Keogh, 2009: A comparison of the impact of QuikScat and WindSat wind vector products on Met Office analyses and forecasts. IEEE Trans. Geosci. Remote Sens., 47, 1632–1640, https://doi.org/10.1109/TGRS.2008.2009993.

  • Casey, S. P. F., R. Atlas, S. A. Boukabara, R. N. Hoffman, K. Ide, M. Masutani, I. Moradi, and J. S. Woollen, 2016: Geostationary hyperspectral infrared constellation: Global observing system simulation experiments for five Geo-HSS instruments. 20th Conf. on Integrated Observing and Assimilation Systems for the Atmosphere, Oceans, and Land Surface (IOAS-AOLS), New Orleans, LA, Amer. Meteor. Soc., J7.4, https://ams.confex.com/ams/96Annual/webprogram/Paper283540.html.

  • Clarizia, M. P., and V. Zavorotny, 2015: Algorithm theoretical basis document level 2 wind speed retrieval. University of Michigan Doc. 148-0138, 95 pp.

  • Entekhabi, D., and Coauthors, 2010: The Soil Moisture Active and Passive (SMAP) mission. Proc. IEEE, 98, 704–716, https://doi.org/10.1109/JPROC.2010.2043918.

  • Errico, R. M., R. Yang, N. C. Privé, K. Tai, R. Todling, M. E. Sienkiewicz, and J. Guo, 2013: Development and validation of observing-system simulation experiments at NASA’s Global Modeling and Assimilation Office. Quart. J. Roy. Meteor. Soc., 139, 1162–1178, https://doi.org/10.1002/qj.2027.

  • Gall, R., J. Franklin, F. Marks, E. N. Rappaport, and F. Toepfer, 2013: The Hurricane Forecast Improvement Project. Bull. Amer. Meteor. Soc., 94, 329–343, https://doi.org/10.1175/BAMS-D-12-00071.1.

  • Gopalakrishnan, S. G., S. Goldenberg, T. Quirino, X. Zhang, F. Marks Jr., K.-S. Yeh, R. Atlas, and V. Tallapragada, 2012: Toward improving high-resolution numerical hurricane forecasting: Influence of model horizontal grid resolution, initialization, and physics. Wea. Forecasting, 27, 647–666, https://doi.org/10.1175/WAF-D-11-00055.1.

  • Hock, T. F., and J. L. Franklin, 1999: The NCAR GPS dropwindsonde. Bull. Amer. Meteor. Soc., 80, 407–420, https://doi.org/10.1175/1520-0477(1999)080<0407:TNGD>2.0.CO;2.
  • Hoffman, R. N., 1982: SASS wind ambiguity removal by direct minimization. Mon. Wea. Rev., 110, 434445, https://doi.org/10.1175/1520-0493(1982)110<0434:SWARBD>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hoffman, R. N., 1984: SASS wind ambiguity removal by direct minimization. Part II: Use of smoothness and dynamical constraints. Mon. Wea. Rev., 112, 18291852, https://doi.org/10.1175/1520-0493(1984)112<1829:SWARBD>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hoffman, R. N., and R. Atlas, 2016: Future observing system simulation experiments. Bull. Amer. Meteor. Soc., 97, 16011616, https://doi.org/10.1175/BAMS-D-15-00200.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hoffman, R. N., S. M. Leidner, J. M. Henderson, R. Atlas, J. V. Ardizzone, and S. C. Bloom, 2003: A two-dimensional variational analysis method for NSCAT ambiguity removal: Methodology, sensitivity, and tuning. J. Atmos. Oceanic Technol., 20, 585605, https://doi.org/10.1175/1520-0426(2003)20<585:ATDVAM>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Johnson, A., X. Wang, J. R. Carley, L. J. Wicker, and C. Karstens, 2015: A comparison of multiscale GSI-based EnKF and 3DVar data assimilation using radar and conventional observations for midlatitude convective-scale precipitation forecasts. Mon. Wea. Rev., 143, 30873108, https://doi.org/10.1175/MWR-D-14-00345.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Knaff, J. A., and R. M. Zehr, 2007: Reexamination of tropical cyclone wind–pressure relationships. Wea. Forecasting, 22, 7188, https://doi.org/10.1175/WAF965.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Komjathy, A., M. Armatys, D. Masters, P. Axelrad, V. Zavorotny, and S. Katzberg, 2004: Retrieval of ocean surface wind speed and wind direction using reflected GPS signals. J. Atmos. Oceanic Technol., 21, 515526, https://doi.org/10.1175/1520-0426(2004)021<0515:ROOSWS>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Leidner, S. M., L. Isaksen, and R. N. Hoffman, 2003: Impact of NSCAT winds on tropical cyclones in the ECMWF 4DVAR assimilation system. Mon. Wea. Rev., 131, 326, https://doi.org/10.1175/1520-0493(2003)131<0003:IONWOT>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Leidner, S. M., B. Annane, B. McNoldy, R. N. Hoffman, and R. Atlas, 2018: Variational analysis of simulated ocean surface winds from the Cyclone Global Navigation Satellite System (CYGNSS) and evaluation using a regional OSSE. J. Atmos. Oceanic Technol., https://doi.org/10.1175/JTECH-D-17-0136.1, in press,

    • Crossref
    • Search Google Scholar
    • Export Citation
  • McNoldy, B. D., B. Annane, J. Delgado, L. Bucci, R. Atlas, S. J. Majumdar, M. Leidner, and R. N. Hoffman, 2016: Impact of CYGNSS data on tropical cyclone analyses and forecasts in a regional OSSE framework. 20th Conf. on Integrated Observing and Assimilation Systems for the Atmosphere, Oceans, and Land Surface (IOAS-AOLS), New Orleans, LA, Amer. Meteor. Soc., J6.6, https://ams.confex.com/ams/96Annual/webprogram/Paper285158.html.

  • McNoldy, B. D., B. Annane, S. Majumdar, J. Delgado, L. Bucci, and R. Atlas, 2017: Impact of assimilating CYGNSS data on tropical cyclone analyses and forecasts in a regional OSSE framework. Mar. Technol. Soc. J., 51, 715, https://doi.org/10.4031/MTSJ.51.1.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Nolan, D. S., R. Atlas, K. T. Bhatia, and L. R. Bucci, 2013: Development and validation of a hurricane nature run using the joint OSSE nature run and the WRF Model. J. Adv. Model. Earth Syst., 5, 382405, https://doi.org/10.1002/jame.20031.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • O’Brien, A., 2014: CYGNSS end-to-end simulator technical memo. University of Michigan Doc. 148-0123, 23 pp., http://clasp-research.engin.umich.edu/missions/cygnss/reference/148-0123_CYGNSS_E2ES_EM.pdf.

  • Powell, M. D., and T. A. Reinhold, 2007: Tropical cyclone destructive potential by integrated kinetic energy. Bull. Amer. Meteor. Soc., 88, 513526, https://doi.org/10.1175/BAMS-88-4-513.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Rappaport, E. N., and Coauthors, 2009: Advances and challenges at the National Hurricane Center. Wea. Forecasting, 24, 395419, https://doi.org/10.1175/2008WAF2222128.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Rogers, R., P. Reasor, and S. Lorsolo, 2013: Airborne Doppler observations of the inner-core structural differences between intensifying and steady-state tropical cyclones. Mon. Wea. Rev., 141, 29702991, https://doi.org/10.1175/MWR-D-12-00357.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ruf, C. S., and Coauthors, 2016a: New ocean winds satellite mission to probe hurricanes and tropical convection. Bull. Amer. Meteor. Soc., 97, 385395, https://doi.org/10.1175/BAMS-D-14-00218.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ruf, C. S., and Coauthors, 2016b: CYGNSS Handbook. University of Michigan, 154 pp.

  • Schulz, E. W., J. D. Kepert, and D. J. M. Greenslade, 2007: An assessment of marine surface winds from the Australian Bureau of Meteorology numerical weather prediction systems. Wea. Forecasting, 22, 613636, https://doi.org/10.1175/WAF996.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Tallapragada, V., and Coauthors, 2013: Hurricane Weather Research and Forecasting (HWRF) Model: 2013 scientific documentation. Developmental Testbed Center Tech. Rep., 99 pp., https://dtcenter.org/HurrWRF/users/docs/scientific_documents/HWRFv3.5a_ScientificDoc.pdf.

  • Uhlhorn, E. W., and D. S. Nolan, 2012: Observational undersampling in tropical cyclones and implications for estimated intensity. Mon. Wea. Rev., 140, 825840, https://doi.org/10.1175/MWR-D-11-00073.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Uhlhorn, E. W., P. G. Black, J. L. Franklin, M. Goodberlet, J. Carswell, and A. S. Goldstein, 2007: Hurricane surface wind measurements from an operational stepped frequency microwave radiometer. Mon. Wea. Rev., 135, 30703085, https://doi.org/10.1175/MWR3454.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Willoughby, H. E., E. N. Rappaport, and F. D. Marks, 2007: Hurricane forecasting: The state of the art. Nat. Hazards Rev., 8, https://doi.org/10.1061/(ASCE)1527-6988(2007)8:3(45).

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Fig. 1.

Geometry of bistatic radar measurement of GPS-based quasi-specular surface scattering. The GPS direct signal (Transmitter) provides location, timing, and frequency references, while the forward scattered signal received by CYGNSS (Receiver) contains ocean surface information. Image from Clarizia and Zavorotny (2015).

  • Fig. 2.

    Basic flowchart of the regional OSSE framework.

  • Fig. 3.

Example of sampling of the North Atlantic by the simulated CYGNSS constellation, ±3 h around 1500 UTC 3 Aug 2005. The locations of simulated CYGNSS data in the 6-h window are plotted as colored dots. The blue and green dots show the locations of subsets of all observations within ±1.5 and ±0.5 h, respectively, of 1500 UTC. CYGNSS observation locations are overlaid on the HNR1 27-km-resolution (d01) 10-m wind speed field, valid at the same time.

  • Fig. 4.

    Configuration of model domains. The 27-km-resolution domain (d01) of HNR1 is shown in blue, and the 9-km (d01) and nested 3-km (d02) OSSE grids are shown in black.

  • Fig. 5.

Average storm forecast errors, with light lines showing the ± standard deviation envelope, plotted for (a),(d),(g) 6-; (b),(e),(h) 3-; and (c),(f),(i) 1-hourly DA cycling experiments. Mean errors and deviations are colored by OSSE experiment: black/gray for CNTL, red/light red for CYG, and blue/light blue for VAM.

  • Fig. 6.

(a) MSLP forecast error and (b) maximum wind speed forecast error of experiments CNTL3 (heavy dashed black) and CNTL1 (solid black) with respect to CNTL6 forecast errors. The 95% CIs are plotted: two-sided CIs are plotted in transparent gray, and one-sided CIs are plotted with a thin dash–dotted line for CNTL3 and a dotted line for CNTL1. (c),(d) As in (a),(b), but for CYG3 and CYG1 errors with respect to CYG6 forecast errors. (e),(f) As in (a),(b), but for VAM3 and VAM1 errors with respect to VAM6 forecast errors.

  • Fig. 7.

(a) MSLP forecast error and (b) maximum wind speed forecast error of experiments CYG3 (red) and VAM3 (blue) with respect to CNTL3. The 95% CIs are plotted: two-sided CIs are plotted in transparent colors, and one-sided CIs are plotted with thin dotted lines.
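    The confidence intervals cited in the captions of Figs. 6 and 7 summarize whether differences in forecast error between experiments are statistically meaningful. As background only, the sketch below illustrates one common way to estimate such an interval, a paired bootstrap over verification times; the function name, sample sizes, and synthetic data are hypothetical, and the paper's actual CI procedure may differ.

    ```python
    import numpy as np

    def paired_bootstrap_ci(err_a, err_b, n_boot=10000, alpha=0.05, seed=0):
        """Two-sided (1 - alpha) CI for the mean paired difference err_a - err_b.

        err_a, err_b : 1-D arrays of forecast errors from two experiments,
                       paired by verification time.
        """
        rng = np.random.default_rng(seed)
        diff = np.asarray(err_a, float) - np.asarray(err_b, float)
        n = diff.size
        # Resample verification times with replacement; recompute the mean difference.
        means = np.array([diff[rng.integers(0, n, n)].mean() for _ in range(n_boot)])
        return np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)

    # Illustrative use with synthetic track errors (km) from two hypothetical experiments.
    rng = np.random.default_rng(1)
    cntl_err = 50 + 10 * rng.standard_normal(40)
    cyg_err = 45 + 10 * rng.standard_normal(40)
    lo, hi = paired_bootstrap_ci(cyg_err, cntl_err)
    # An interval lying entirely below zero would indicate CYG errors are
    # significantly smaller than CNTL errors at the 95% level.
    ```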

  • Fig. 8.

    Large-scale, domain-averaged, 10-m wind errors (RMS; m s−1) for (a) 6-hourly DA cycling, (b) 3-hourly DA cycling, and (c) hourly cycling. Experiments are plotted by color as in Fig. 5.

  • Fig. 9.

    Absolute IKE error (TJ) as a function of forecast hour for (a) 6-hourly DA cycling, (b) 3-hourly DA cycling, and (c) hourly cycling. Error is the difference between OSSE experiment IKE and NR IKE (HNR1).
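    For context on the quantity plotted in Fig. 9: integrated kinetic energy (IKE; Powell and Reinhold 2007) integrates 0.5ρU² over a 1-m-deep layer wherever the 10-m wind reaches tropical-storm force. The Python sketch below is illustrative only and is not the authors' code; the grid spacing, air density, and wind threshold are assumed values chosen to mirror the usual definition.

    ```python
    import numpy as np

    def integrated_kinetic_energy(wind10, dx, dy, rho=1.15, threshold=18.0, depth=1.0):
        """Approximate IKE (TJ) from a gridded 10-m wind speed field.

        wind10    : 2-D array of 10-m wind speeds (m s-1)
        dx, dy    : grid spacing (m)
        rho       : near-surface air density (kg m-3); illustrative value
        threshold : only winds at or above this speed contribute (m s-1),
                    roughly tropical-storm force
        depth     : layer depth (m); a 1-m slab is conventional
        """
        u = np.asarray(wind10, dtype=float)
        mask = u >= threshold                      # tropical-storm-force winds only
        cell_volume = dx * dy * depth              # m^3 per grid cell
        ike_joules = 0.5 * rho * np.sum(u[mask] ** 2) * cell_volume
        return ike_joules / 1.0e12                 # terajoules

    # Absolute IKE error between a forecast and the nature run, as in Fig. 9
    # (synthetic wind fields used here purely for illustration).
    ny, nx = 200, 200
    forecast_wind = 20.0 * np.random.rand(ny, nx)  # hypothetical 10-m winds (m s-1)
    nature_wind = 20.0 * np.random.rand(ny, nx)
    dx = dy = 3000.0                               # e.g., a 3-km grid
    ike_error = abs(integrated_kinetic_energy(forecast_wind, dx, dy)
                    - integrated_kinetic_energy(nature_wind, dx, dy))
    ```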

  • Fig. 10.

(a) NR 10-m wind speed valid at 1800 UTC 4 Aug and (b)–(d) 24-h forecasts of 10-m wind speed from OSSE experiments CNTL3, CYG3, and VAM3, valid at the same time as (a). The instantaneous wind maximum Vmax is labeled in the lower left of each panel.

  • Fig. 11.

    As in Fig. 10, but for (a) NR valid time of 1800 UTC 5 Aug and (b)–(d) 24-h OSSE experiment forecasts valid at 1800 UTC 5 Aug.
