1. Introduction
Numerical weather prediction forecast skill has improved over the last 20 years for a variety of reasons, such as advanced data assimilation systems and the inclusion of satellite radiance data (e.g., Janoušek et al. 2012; Yang 2014). Specifically, assimilation of satellite radiance data, including the Microwave Humidity Sounder (MHS), using variational data assimilation (DA) systems has had positive forecast impacts in global forecast systems (Andersson et al. 1991; Mo et al. 1995; Derber and Wu 1998; Bouttier and Kelly 2001; Simmons and Hollingsworth 2002; Zapotocny et al. 2007; Cardinali 2009; Collard et al. 2011; Joo et al. 2013). However, satellite radiance assimilation has shown diminished positive forecast impacts in limited-area (regional) modeling systems using variational DA (Zapotocny et al. 2005; Xu et al. 2009), which may be due to a variety of factors, including nonuniform satellite coverage (Schwartz et al. 2012) and the influence of lateral boundary conditions within the regional domain (Warner et al. 1997).
Forecasts initialized from analyses using more advanced DA systems, such as the ensemble Kalman filter (EnKF; Evensen 1994; Burgers et al. 1998; Houtekamer and Mitchell 1998), are often better than forecasts initialized by three-dimensional variational DA and comparable to forecasts initialized by four-dimensional variational schemes on global and regional scales (Meng and Zhang 2008a,b; Szunyogh et al. 2008; Whitaker et al. 2008; Miyoshi et al. 2010; Torn 2010; Hamill et al. 2011; Zhang et al. 2011; Zhang et al. 2013). Assimilation of satellite radiances within an ensemble DA framework also has positive forecast impacts in global models (Houtekamer et al. 2005; Miyoshi and Sato 2007; Buehner et al. 2010a,b; Miyoshi et al. 2010; Aravéquia et al. 2011; Hamill et al. 2011); therefore, several studies have begun to examine the impact of satellite radiances in regional modeling systems.
Schwartz et al. (2012) use an ensemble adjustment Kalman filter (EAKF; Anderson 2001, 2003; Liu et al. 2007) to examine the impact of satellite microwave radiance assimilation in a regional model on the track, intensity, and rainfall forecasts of a typhoon near Taiwan. They show that assimilating radiances improves wind and sea level pressure forecasts while the typhoon was weakening, but find no clear signal from radiance DA during the typhoon's intensifying period. Liu et al. (2012, hereafter referred to as L12) demonstrate that assimilating Advanced Microwave Sounding Unit A (AMSU-A) radiances in addition to conventional observations produces significant reductions in track error for five tropical cyclones in the North Atlantic, which they attribute to improved analyses and forecasts of the large-scale temperature, wind, and height fields over the mid-Atlantic Ocean.
Schwartz et al. (2012) assimilate observations from the AMSU-A, AMSU-B, and MHS sensors but do not isolate the impacts of any single instrument, while L12 examines only AMSU-A impacts and specifically notes that future work should examine radiance assimilation from other instruments. This study builds on Schwartz et al. (2012) and L12 by assessing the impacts of assimilating MHS radiances in addition to AMSU-A and conventional observations. Notably, the MHS is designed to retrieve profiles of atmospheric water vapor (Davis 2007; NOAA 2014), and atmospheric moisture profiles have been shown to influence tropical convection and cyclones (Tompkins 2001; Dunion and Velden 2004; Hill and Lackmann 2009; Holloway and Neelin 2009). Therefore, an in-depth evaluation of moisture is undertaken.
An overview of the model, assimilation strategy, and data assimilated is provided in section 2. Forecast verification results using both reanalysis and observational data are presented in section 3, focusing on the full computational domain as well as a subset of the domain. Section 4 examines aspects of tropical cyclone (TC) forecast verification, followed by summary statements and possible future directions in section 5.
2. Model setup and data
Two experiments were conducted to investigate the impacts of MHS radiance assimilation over the North Atlantic Ocean (Fig. 1). The baseline experiment includes assimilation of AMSU-A radiances in addition to conventional observations, hereafter referred to as AMSA. The second experiment builds on the AMSA run and also assimilates MHS radiance observations, hereafter referred to as AMHS. The MHS sensor is a five-channel scanning microwave radiometer with three channels that are sensitive to tropospheric humidity (NOAA 2014). The model setup, experimental design, and assimilation strategy of the AMSA experiment exactly follow L12, which allows for isolation of the impacts of assimilating MHS radiances. Both experiments used the Advanced Research version of the Weather Research and Forecasting Model, version 3.2.1 (ARW; Skamarock et al. 2008), initialized from EAKF analyses generated using the Data Assimilation Research Testbed (DART; Anderson et al. 2009) with the WRF-specific interface. The experiment period was 0000 UTC 11 August–0000 UTC 13 September 2008. The domain is pictured in Fig. 1, which features much of the contiguous United States (CONUS) as well as the Atlantic basin. In addition to the full computational domain, a subset of the domain (hereafter referred to as the subdomain) was identified as an area of interest for additional verification, indicated by the dashed box (Fig. 1). The subdomain was included to provide focused verification on the primary region of TC development and maturation during the experimental period.
Computational domain for the experiments with the verification subdomain denoted with red dashed box.
Citation: Weather and Forecasting 30, 4; 10.1175/WAF-D-14-00091.1
For both experiments, the model was configured with 36-km horizontal grid spacing, 45 vertical levels, and a 20-hPa model top. The model parameterizations were the WRF single-moment 5-class microphysics scheme (WSM5; Hong et al. 2004), the Goddard shortwave (Chou and Suarez 1994) and Rapid Radiative Transfer Model (RRTM) longwave (Mlawer et al. 1997; Cavallo et al. 2011) radiation schemes, the Yonsei University (YSU) boundary layer scheme (Hong et al. 2006), the Noah land surface model (Chen and Dudhia 2001), and the Kain–Fritsch cumulus parameterization (Kain and Fritsch 1990). New 96-member EAKF analyses were produced every 6 h using a full cycling configuration. Ensemble lateral boundary conditions (LBCs) were provided from perturbed Global Forecast System (GFS) analyses and forecasts (Torn et al. 2006). Adaptive inflation (Anderson et al. 2009) was used to maintain ensemble spread. Deterministic 72-h ARW forecasts were initialized from the 0000 and 1200 UTC ensemble mean analyses.
The data assimilated for AMSA and AMHS were identical, with the addition of the MHS radiance data in the AMHS experiment. Table 1 summarizes the conventional observations assimilated, as well as the specific satellite sensor and channels for which radiances were assimilated. The AMSU-A channels assimilated follow L12, and the two MHS channels assimilated provide upper- and middle-tropospheric moisture information. Other MHS channels were not assimilated because of their primary temperature sensitivity (channel 1) or because their weighting functions intersect the surface (channels 1, 2, and 5) (NOAA 2014). The AMSU-A and MHS sensors are both aboard the NOAA-18 and MetOp-2 satellites, resulting in nearly identical overpasses for the sensors (Fig. 2). Figure 2 shows an example 1200 UTC overpass for MetOp-2 for AMSU-A (Fig. 2a) and MHS (Fig. 2d), as well as NOAA-18 for AMSU-A (Fig. 2b) and MHS (Fig. 2e). The MetOp-2 satellite provides the only overpass in the computational domain at 0000 UTC, shown for both AMSU-A (Fig. 2c) and MHS (Fig. 2f). The average number of data points assimilated for AMSU-A and MHS is of the same order of magnitude [e.g., MetOp-2 AMSU-A channel 5 ensemble member 10 assimilated 3122 (1200 UTC 22 August 2008) and 1668 (0000 UTC 3 September 2008) observations, while MetOp-2 MHS channel 3 ensemble member 10 assimilated 3205 (1200 UTC 22 August 2008) and 1681 (0000 UTC 3 September 2008)]. Radiance data were thinned on a 72-km grid over nonprecipitating grid boxes, and all observations were assimilated with a ±1.5-h assimilation window. Identical conventional observations were assimilated for both experiments (Table 1); preprocessing of the aircraft and satellite wind data followed Torn and Hakim (2008), and storm position and intensity information was assimilated following Chen and Snyder (2007).
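The 72-km thinning step described above can be illustrated with a minimal sketch. The function name `thin_to_grid` and the keep-first-observation-per-cell rule are illustrative assumptions, not the actual WRFDA thinning algorithm, which additionally screens precipitating grid boxes:

```python
def thin_to_grid(lats, lons, values, box_km=72.0):
    """Keep one observation per ~box_km grid cell (first encountered).

    A simple stand-in for radiance thinning: observations are binned
    into cells of roughly box_km on a side and duplicates discarded.
    """
    deg = box_km / 111.0  # rough conversion from km to degrees latitude
    seen = {}
    for lat, lon, val in zip(lats, lons, values):
        key = (int(lat // deg), int(lon // deg))
        seen.setdefault(key, val)  # first observation in the cell wins
    return list(seen.values())
```

For example, two observations 0.1° apart fall in the same 72-km cell, so only the first is retained, while a distant third observation survives.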
Conventional observations and radiances assimilated.
Location of overpasses for channel 5 AMSU-A at 1200 UTC 22 Aug 2008 for platforms (a) MetOp-2 and (b) NOAA-18 and at 0000 UTC 3 Sep 2008 for (c) MetOp-2. (d)–(f) As in (a)–(c), but for channel 3 MHS. Color bar specifies brightness temperature values (K).
To assimilate radiance data using DART, the WRF-Data Assimilation (WRFDA; Barker et al. 2012) system was used as the radiance forward operator for computing radiance prior ensembles (Houtekamer et al. 2005; Hamill et al. 2011; Schwartz et al. 2012). Within WRFDA, the Community Radiative Transfer Model (CRTM; Han et al. 2006; Liu and Weng 2006) calculates model-simulated brightness temperatures from WRF temperature and moisture profiles. Following L12, radiance bias correction coefficients were calculated by running WRFDA's Variational Bias Correction (VarBC; Derber and Wu 1998; Dee 2005; Auligné et al. 2007; Schwartz et al. 2012) in offline mode over the 3 months prior to the beginning of the experiment, providing spun-up coefficients at the start of cycling. VarBC is an adaptive scheme that, even run offline, provides better bias correction than a static scheme and is a good compromise between a static scheme and a fully adaptive offline scheme (Auligné et al. 2007). In addition to applying bias correction to all radiance data, quality control (QC) checks within WRFDA were used to ensure that only high quality radiance data were assimilated. Figure 3 illustrates example quality-controlled AMSU-A and MHS data before (Figs. 3a,b) and after (Figs. 3c,d) bias correction. Note the noisier appearance of the MHS data relative to the AMSU-A data both before and after bias correction, likely stemming from a less effective QC procedure for the MHS data, with consequences for analysis and forecast quality (Yan et al. 2010; Guan et al. 2011; Zou et al. 2013; see also section 3).
Scatterplot of background brightness temperature (K) vs observation brightness temperature (K) for AMSU-A channel 5 (a) before and (c) after bias correction, and MHS channel 3 (b) before and (d) after bias correction for MetOp-2.
For the AMSU-A sensor, the vertical location of each radiance observation in WRFDA was taken as the level of the maximum of the weighting function, which varies both spatially and temporally (Hamill et al. 2011; Schwartz et al. 2012; L12). Because the MHS sensor is sensitive to both temperature and moisture, a new weighting function was defined, requiring modification to the WRFDA system. The new weighting function is d(Tr)/dp, where p is pressure and Tr is transmittance, calculated in WRFDA from the CRTM layer optical depths. This weighting function depends primarily on moisture, with a secondary temperature dependence, in contrast to the pure temperature Jacobian d(Tb)/dT used for AMSU-A (where Tb is brightness temperature) or the moisture Jacobian (Fig. 4). AMSU-A channels 5–7 correspond to weighting functions peaking around 700, 400, and 250 hPa, respectively, whereas the MHS weighting functions for channels 3 and 4 peak in the midtroposphere (700–400 hPa). The peak levels of the flow-dependent weighting functions are used for vertical localization, where increments were constrained vertically to within ±1.9–6.4 km of an observation depending on observation density (L12).
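The d(Tr)/dp weighting function and its peak-level extraction can be sketched as follows. The layer optical depths and pressure levels below are made-up illustrative values standing in for CRTM output, not values from the study:

```python
import numpy as np

# Hypothetical layer optical depths, ordered model top to surface,
# as would be provided by a radiative transfer model such as the CRTM.
layer_optical_depth = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.40, 0.20])
# Layer-interface pressures (hPa), model top to surface.
p_levels = np.array([100., 200., 300., 400., 500., 700., 850., 1000.])

# Transmittance from the top of the atmosphere down to each interface:
# Tr(p) = exp(-cumulative optical depth above p).
cum_tau = np.concatenate(([0.0], np.cumsum(layer_optical_depth)))
transmittance = np.exp(-cum_tau)

# Weighting function d(Tr)/dp evaluated by finite differences at
# layer midpoints.
weighting_fn = np.diff(transmittance) / np.diff(p_levels)
p_mid = 0.5 * (p_levels[:-1] + p_levels[1:])

# The observation's vertical location for localization is taken as the
# pressure where |d(Tr)/dp| peaks.
peak_pressure = p_mid[np.argmax(np.abs(weighting_fn))]
```

With these illustrative optical depths the weighting function peaks in the midtroposphere, consistent with the MHS channel 3 and 4 behavior described above.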
Normalized vertical weighting functions for (a) AMSU-A (5, 6, and 7 = blue, green, and red) and (b) MHS (3 and 4 = blue and green) channels assimilated.
3. Forecast verification
Verification of each experiment was performed against the European Centre for Medium-Range Weather Forecasts interim reanalysis (ERA-Interim, hereafter ERA-I; Dee et al. 2011) using the Model Evaluation Tools (MET) verification software (Brown et al. 2009). The ERA-I dataset is generally considered to be the highest quality global reanalysis available (Trenberth et al. 2011; Lorenz and Kunstmann 2012; Trenberth and Fasullo 2013), and the forecast system used to generate ERA-I also produces high quality TC track forecasts (Fiorino 2009). Although ERA-I assimilates AMSU-A and MHS radiance data (Dee et al. 2011), the analysis is generated independently of the GFS initial and boundary conditions and uses different data assimilation procedures. The ERA-I spatial resolution of T255 spectral (approximately 80 km) is coarser than that of the 36-km ARW runs; however, both models use convective parameterization, and the verification focuses on time-averaged continuous statistics, which limits the impact of the resolution mismatch. ERA-I can therefore be considered the highest quality independent verification analysis, providing the best available atmospheric state estimates.
MET provides a community-supported, flexible framework for rigorous verification over user-specified times and areas. The time-aggregated MET verification over the full and subdomain (Fig. 1) for all forecast initialization times is discussed first. Additional full and subdomain verification is performed using precipitable water data from the National Aeronautics and Space Administration (NASA) Water Vapor Project (NVAP) under the NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) program (NVAP-M; Vonder Haar et al. 2012) dataset. The NVAP-M is a high quality, consistently generated, gridded dataset based only on observations used to estimate global atmospheric water vapor and is useful for climate or more focused weather studies (Vonder Haar et al. 2012). As with ERA-I, the NVAP-M dataset provides an independent dataset for verification.
a. ERA-I verification
Vertical profiles of specific humidity and temperature aggregated temporally and spatially over all analyses and 48-h forecasts are shown in Fig. 5. To remove the variability caused by common forecast challenges, the error differences rather than the errors themselves can be used to provide a powerful test to determine statistically significant differences (Hamill 1999). Consequently, this pairwise difference technique is used herein to determine whether the differences between the experiments are statistically significant. The black dashed lines in Fig. 5 show the pairwise differences and associated 95% parametric confidence intervals (CIs), where each difference is statistically significant when the CIs do not encompass zero. Note that all statistical significance discussion in this work uses the parametric 95% CIs to determine statistical significance. For the analysis time, the AMSA mean temperature biases as compared to ERA-I are smaller than those of AMHS for all levels (Fig. 5a). The same is true for the 48-h forecast time (Fig. 5b), although the differences in mean error of the two configurations are substantially reduced relative to the analysis time, and there are only very small differences in the lowest 100 hPa. For temperature, AMSA outperforms AMHS for all levels at both the analysis and 48-h forecast lead time. Conversely, the specific humidity bias at the analysis time has pairwise differences favoring the AMSA experiment at the lowest levels (1000–925 hPa; Fig. 5c), with pairwise differences showing improvement for the remaining vertical levels (850–400 hPa) in the AMHS simulation. For the 48-h forecast time (Fig. 5d), the statistically significant differences favoring AMSA at the lowest levels are no longer present, but differences favoring AMHS are still evident from 850 to 500 hPa.
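The pairwise-difference significance test can be sketched as below. The function names are ours, and the simple Gaussian interval omits any autocorrelation or effective-sample-size adjustment the study's parametric CIs may include:

```python
import numpy as np

def paired_difference_ci(err_a, err_b, z=1.96):
    """95% parametric CI on the mean pairwise error difference.

    err_a, err_b: matched arrays of errors from two experiments at the
    same verification times/points (Hamill 1999 style pairing).
    Returns (mean difference, CI lower bound, CI upper bound).
    """
    d = np.asarray(err_a, dtype=float) - np.asarray(err_b, dtype=float)
    mean = d.mean()
    half_width = z * d.std(ddof=1) / np.sqrt(d.size)
    return mean, mean - half_width, mean + half_width

def significant(lower, upper, threshold=0.0):
    """True when the CI lies entirely outside [-threshold, +threshold].

    threshold=0 recovers ordinary statistical significance (CI does not
    encompass zero); a nonzero threshold corresponds to a practical
    significance test.
    """
    return lower > threshold or upper < -threshold
```

Pairing the errors before differencing removes the variability shared by the two experiments, which is what makes the test powerful despite small mean differences.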
Vertical profiles of (a) aggregated analysis and (b) 48-h forecast temperature (K) biases against ERA-I over the full computational domain. (c),(d) As in (a),(b), but for specific humidity (g kg−1): AMHS (green), AMSA (blue), and AMHS − AMSA (black) with associated 95% parametric confidence intervals.
To summarize the differences between the configurations over many vertical levels, forecast hours, and parameters, statistical significance tables are shown in Figs. 6 and 7. For temperature over the full domain (Fig. 6), most levels and lead times show statistically significant pairwise differences favoring AMSA for bias, while results for the zonal U and meridional V wind fields are mixed between AMSA and AMHS. For specific humidity, more differences favoring the AMHS experiment become apparent, particularly at 500 and 700 hPa. In general, the root-mean-square error (RMSE) results favor the AMSA configuration for temperature and wind. Zou et al. (2013) describe forecast degradations associated with MHS assimilation, attributed to the MHS QC algorithm in the Gridpoint Statistical Interpolation analysis system (GSI; which uses innovation checks for observation QC similar to WRFDA) failing to identify cloudy radiances in particular cases, and propose an improved QC algorithm. The large number of statistically significant differences favoring the AMSA experiment in the RMSE statistics may reflect the assimilated MHS brightness temperatures being less effectively quality controlled than the AMSU-A data (Fig. 3); for example, some lower quality observations still pass the QC checks and negatively impact the analysis, which then propagates through the forecast period (Yan et al. 2010; Zou et al. 2013). These results provide further support for implementing enhanced QC when assimilating MHS radiances.
Statistical and practical significance table for full domain for 0-, 24-, 48-, and 72-h lead times at 925, 850, 700, 500, and 250 hPa. Shaded cells with solid text indicate statistical significance whereas dark shading with boldface white text indicates practical significance. Cells with dashes were not considered. Green shading indicates differences favoring AMHS and blue shading indicates differences favoring AMSA.
As in Fig. 6, but for verification over the subdomain.
The subdomain (7°–25°N, 20°–75°W; see Fig. 1) was identified as a region of interest for the five TCs during the experimental time period. Additionally, this region may have influences from dry areas, such as an intrusion of a Saharan air layer (SAL). Figure 7 shows that when gridpoint verification against ERA-I is constrained to the subdomain, a number of statistically significant differences favoring the AMSA over the full domain become statistically indistinguishable, and more statistically significant differences favoring the AMHS configuration are present.
Verification statistics aggregated over more than a month with two deterministic forecast cycles per day generate a vast number of paired grid points. Additionally, the use of paired differences reduces the variance compared to examining raw model output (Hamill 1999). It is therefore expected that the large sample sizes and reduced variances will shrink the pairwise CIs and yield many statistically significant pairwise differences, many of which may be very small. Disseminating verification that flags every statistically significant difference (e.g., Fig. 6) may make it harder for an operational decision-maker to identify the configuration differences that matter. In an attempt to highlight potentially more meaningful forecast differences between the two experiments, a practical significance test is applied to the pairwise differences (United Nations Economic Commission for Europe 2010). For a statistically significant difference to be considered practically significant, the pairwise difference CI must lie entirely outside the practical significance threshold (practical significance reduces to statistical significance when the threshold is zero). Practical significance tests are designed to be flexible in a way traditional statistical significance tests are not: statistical significance tests always use zero difference as the null hypothesis, whereas practical significance tests can use user-defined thresholds. Flexibility in forecast verification metrics is an area of active research, and the need for such metrics has been clearly identified (Casati et al. 2008; Ebert et al. 2013).
The practical significance threshold in the current study is determined using observational measurement uncertainties specified by the World Meteorological Organization (WMO 2010). The WMO standards of measurement establish measurement uncertainty and instrument performance requirements for various variables: temperature and dewpoint temperature must be measured to within 0.1 K and wind speed to within 0.5 m s−1. These values are applied to the statistically significant differences to determine practical significance. Because specific humidity is not a directly measured quantity (and therefore has no WMO standard), the dewpoint temperature standard (0.1-K uncertainty) was converted to specific humidity using the average temperature at each pressure level in the full domain to obtain an equivalent practical significance threshold for specific humidity.
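The conversion of the 0.1-K dewpoint uncertainty into a specific humidity threshold can be sketched as follows. The Bolton (1980) saturation vapor pressure formula is our assumption for illustration, since the text does not state which formulation was used:

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Saturation vapor pressure (hPa), Bolton (1980) approximation."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def specific_humidity(td_celsius, p_hpa):
    """Specific humidity (g/kg) from dewpoint (C) and pressure (hPa)."""
    e = saturation_vapor_pressure(td_celsius)  # actual vapor pressure
    q = 0.622 * e / (p_hpa - 0.378 * e)        # kg/kg
    return 1000.0 * q

def q_threshold(td_celsius, p_hpa, dtd=0.1):
    """Specific humidity equivalent of a dewpoint uncertainty dtd (K),
    evaluated at a representative dewpoint for the pressure level."""
    return (specific_humidity(td_celsius + dtd, p_hpa)
            - specific_humidity(td_celsius, p_hpa))
```

Evaluating `q_threshold` at, say, a 10°C dewpoint and 700 hPa yields a threshold of a few hundredths of a gram per kilogram, illustrating how small a practically significant humidity difference is at midlevels.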
In Figs. 6 and 7, the values that meet the practical significance threshold are shaded darker with boldface white text. For the full domain (Fig. 6), the only practically significant values are for specific humidity at 500 hPa; all other shaded cells are statistically significant, but the differences are too small to be considered practically significant. The subdomain (Fig. 7) additionally reveals practically significant differences favoring AMHS over AMSA for specific humidity at both 500 and 700 hPa. This result shows that assimilating MHS data, from a sensor designed primarily to observe moisture, has a significant positive impact on the field it is designed to measure.
Because practically significant differences are present only for midlevel specific humidity, the majority of the forthcoming analysis focuses on specific humidity differences at the 700- and 500-hPa levels. Figure 8 shows the average analysis and 48-h specific humidity bias at 700 hPa between the model simulations and the ERA-I results [i.e., AMSA−ERA-I (Figs. 8b,e) and AMHS−ERA-I (Figs. 8c,f)]. Figures 8b and 8c show the analysis differences between AMSA (Fig. 8b), AMHS (Fig. 8c), and ERA-I, which indicate that both configurations are very similar to ERA-I in the midlatitudes and too moist in the tropics. The 48-h forecast times (Figs. 8e,f) show that both experiments are generally drier than ERA-I in the midlatitudes, particularly over the CONUS, and are increasingly moister in the tropics. For both the analysis time and the 48-h forecast time, the AMSA run tends to be moister in the tropics compared to AMHS, and conversely, the AMHS run tends to exhibit a drier midtroposphere in the midlatitudes compared to the AMSA run. Although AMHS tends to be slightly too dry over the midlatitudes, both runs exhibit an overall positive moisture bias relative to ERA-I (Table 2). AMSA has slightly larger bias and RMSE statistics for both the analysis and 48-h forecast time with biases typically 0.04 g kg−1 larger than AMHS and RMSE values typically 0.02 g kg−1 larger than AMHS for the entire domain.
The 700-hPa specific humidity (g kg−1) fields aggregated over the entire experiment period for the analysis time for (a) ERA-I, (b) AMSA differences from ERA-I, and (c) AMHS differences from ERA-I. (d)–(f) As in (a)–(c), but for 48-h model forecasts.
Mean specific humidity bias and RMSE statistics associated with the full and subdomain at 700 hPa for analysis time and 48-h forecast lead time.
When considering the subdomain, Fig. 8 shows that both experiments are too moist at 700 hPa over the subdomain region relative to ERA-I. Both the analysis (Figs. 8b,c) and 48-h forecast times (Figs. 8e,f) show a larger moisture bias in the AMSA run compared to the AMHS, with 0.82 g kg−1 for AMSA compared to 0.71 g kg−1 for AMHS at the analysis time and 0.93 g kg−1 for AMSA compared to 0.85 g kg−1 for AMHS at the 48-h forecast time (Table 2). Domain and subdomain averaged statistics in Table 2 highlight larger differences between the runs in the subdomain relative to the full domain, again favoring the AMHS run.
The pairwise difference technique for statistical significance is applied spatially in Fig. 9. The red hatching indicates areas where the AMHS run has statistically significant improvement over the AMSA configuration, whereas the black hatching indicates grid points with statistically significant improvement for AMSA over the entire period. The analysis time (Fig. 9a) shows many differences favoring AMHS, predominantly in the subtropical high and easterly wave regions, ranging from 20° to 70°W and extending as far north as 40°N. Importantly, the pairwise differences favoring AMHS show high spatial coherence, indicating meaningful improvements over this region. The differences favoring AMSA are statistically significant predominantly over Central America and the eastern Pacific Ocean, between 70° and 100°W, reaching to 20°N. At the 48-h lead time (Fig. 9b), the differences generally remain in the same regions, with the differences favoring AMHS maintaining their spatial coherence; however, there are fewer statistically significant differences for both simulations.
Temporally averaged pairwise differences for the 700-hPa specific humidity (g kg−1) field for the analysis time for (a) AMHS differences − AMSA differences from ERA-I [(AMHS-ERA) − (AMSA-ERA)], where statistically significant differences favoring AMSA are shown in black and statistically significant differences favoring AMHS are shown in red. (b) As in (a), but for 48-h model forecasts.
Figure 10 shows time series of aggregated biases against ERA-I for temperature and specific humidity for 500 and 700 hPa over the subdomain. Pairwise differences for temperature (Fig. 10a) show that the AMSA is better than the AMHS configuration out to 18 h at 700 hPa and show no differences between the runs for 500 hPa. For specific humidity (Fig. 10b), there are improvements in the AMHS configuration for all lead times at 700 hPa, and out to 36 h at 500 hPa.
Aggregated forecast bias by lead time for 500 and 700 hPa (a) temperature (K) and (b) specific humidity bias (g kg−1) over the subdomain: AMHS 500 (green solid) and 700 (green dashed) hPa; AMSA as for AMHS, but blue; and AMHS − AMSA as for AMHS, but black (500 hPa) and gray (700 hPa) with associated 95% parametric confidence intervals.
b. Verification using NVAP-M
To verify the model runs against an observational dataset, the NVAP-M dataset was used. The NVAP-M product merges multiple sources of atmospheric water vapor measurements to create a global database of total and layered precipitable water vapor (PW). Here, the total PW is averaged over the entire experimental period for both the NVAP-M dataset and the model runs, with a landmask applied so that only PW over water is considered. Averages over the experimental period were used in lieu of shorter time periods to improve the spatial sampling of the NVAP-M product by reducing missing values and averaging potentially uncertain PW observations. Figure 11 compares each model run against the NVAP-M mean PW for the analysis and the 48-h forecast time. At the analysis time (Figs. 11b,c), both configurations are again predominantly too moist, with a larger area of PW bias greater than 4.2 mm in the AMSA experiment. At the 48-h forecast time (Figs. 11e,f), both experiments are increasingly moist, with areas of moisture bias greater than 5.4 mm occurring in the easterly wave regions and the eastern Pacific Ocean, as well as off the mid-Atlantic coast of the United States. For the subdomain specifically, the bias for the AMHS run (1.2 mm) is smaller than that for the AMSA run (1.5 mm) at the analysis time, and the analysis RMSE is also smaller for AMHS (1.8 mm) than for AMSA (2.0 mm). The same holds at the 48-h lead time, with biases of 1.3 and 1.6 mm and RMSE values of 2.3 and 2.4 mm for the AMHS and AMSA runs, respectively. Overall, the AMHS experiment improves PW analyses and 48-h forecasts over the AMSA configuration for large portions of the computational domain and most markedly in the subdomain.
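The landmasked bias and RMSE computation can be sketched in a few lines of Python; `masked_bias_rmse` is a hypothetical helper written for illustration, not part of MET or the study's code:

```python
import numpy as np

def masked_bias_rmse(forecast, analysis, ocean_mask):
    """Bias and RMSE of PW over water only.

    forecast, analysis: 2D PW grids (mm); ocean_mask: boolean grid that
    is True over water. Land points are set to NaN and excluded, which
    also handles missing NVAP-M retrievals encoded as NaN.
    """
    diff = np.where(ocean_mask, forecast - analysis, np.nan)
    bias = np.nanmean(diff)
    rmse = np.sqrt(np.nanmean(diff ** 2))
    return bias, rmse
```

Restricting the same call to subdomain slices of the grids reproduces the kind of full-domain versus subdomain comparison reported above.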
NVAP-M mean column PW (mm) over water aggregated over the experiment time period for the (a) analysis times and (d) valid times associated with the 48-h model forecasts. AMSA differences from NVAP-M over water aggregated over the experiment time period for the (b) analysis time and (e) 48-h forecast. (c),(f) As in (b),(e), but for AMHS differences from NVAP-M. Bias and RMSE values in the top left of (b)–(f) are for the subdomain denoted by the black dotted line.
The NVAP-M dataset was aggregated for the entire month to improve sampling given the dataset has missing data over large spatial extents each day. Therefore, spatial and temporally averaged bias and RMSE statistics for the model simulations with 95% confidence intervals are used to examine statistical significance and are shown in Fig. 12. The average bias (Fig. 12a) shows the AMHS experiment has a statistically significantly lower positive precipitable water vapor bias than does the AMSA experiment for all forecast lead times. The RMSE (Fig. 12b) highlights lower mean RMSE values for the AMHS, with statistically significantly smaller values for AMHS out to 24 h.
Aggregated forecast lead times total PW vapor (a) bias (mm) and (b) RMSE (mm) verified against NVAP-M over the subdomain for AMHS (green) and AMSA (blue). Bars denote associated 95% parametric confidence intervals.
4. Tropical cyclone verification
L12 focused on track forecasts for the five named TCs that occurred in the domain during the experimental period: Fay, Gustav, Hanna, Ike, and Josephine (Fig. 13). Tropical Storm Fay remained a tropical storm throughout its lifetime, reaching a peak intensity of 110 km h−1 and a minimum sea level pressure (SLP) of 986 hPa, and made landfall in Florida. Major Hurricane Gustav reached category 4 intensity (250 km h−1, 941 hPa) and had many interactions with land (Jamaica, Cuba, and the United States). Hurricane Hanna reached category 1 intensity (140 km h−1) with a minimum SLP of 977 hPa. Major Hurricane Ike also reached category 4 strength (230 km h−1, minimum SLP of 935 hPa) before making landfall in Texas. L12 found statistically significant improvements in mean absolute TC track, intensity, and SLP forecasts in the AMSA configuration when compared to a run without AMSU-A radiances assimilated.
NHC best tracks for the five tropical cyclones during the simulation period: 6 = Fay, 7 = Gustav, 8 = Hanna, 9 = Ike, and 10 = Josephine. Line segments are defined by NHC best-track storm classification for tropical depressions (dark green), tropical storms (yellow), hurricanes (red), extratropical cyclones (black dashed), and remnant lows (purple dashed).
Figure 14 shows the mean absolute TC track, intensity (1-min mean maximum wind), and minimum SLP verification for the AMHS experiment compared to the AMSA experiment. Errors were computed with respect to the National Hurricane Center (NHC) best-track dataset via MET–Tropical Cyclone (MET-TC; Developmental Testbed Center 2014). Differences in TC track errors between the AMSA and AMHS configurations were not statistically significant at the 95% level (Fig. 14a). However, AMHS had lower track errors out to 30 h (statistically significant at the 90% CI for 12 h; not shown), while beyond 42 h AMSA had lower track errors (statistically significant at the 90% CI for 54 h; not shown). This neutral result is somewhat expected given the minimal differences in the U- and V-wind components (Fig. 7). AMHS shows intensity forecast improvements over AMSA at the 18- and 24-h lead times and a tendency to outperform AMSA at all lead times through 54 h (Fig. 14b); the 18-, 24-, and 36-h differences are statistically significant at 90% (not shown). When the raw intensity errors (i.e., bias) are considered (Fig. 15a), the AMHS configuration underintensifies less than the AMSA configuration out to 60 h, with differences favoring AMHS at all lead times through 54 h. For absolute minimum SLP errors (Fig. 14c), AMHS and AMSA show no differences through 48 h, with AMSA better than AMHS at lead times longer than 48 h. When raw minimum SLP errors are considered (Fig. 15b), AMHS consistently produces lower mean minimum SLP values than the AMSA configuration, which results in improvements out to 36 h and degradations from 42 to 54 h, when the AMSA configuration has a bias closer to zero. The differences in behavior between the absolute and raw errors arise because absolute errors describe error variance while raw errors describe bias: the AMHS and AMSA configurations have similar error variances at short lead times (less than ~48 h; Fig. 14c), but the AMHS configuration has a smaller bias at short lead times and a more negative bias at longer lead times (Fig. 15b), along with a larger variance than the AMSA configuration (Fig. 14c).
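The distinction between absolute and raw errors can be illustrated with a toy calculation (the error values below are invented for illustration and are not from the experiments): the mean raw error measures bias, while the mean absolute error also reflects scatter about the truth, so a nearly unbiased forecast can still carry a larger absolute error than a consistently biased one.

```python
import numpy as np

# Hypothetical minimum-SLP errors (forecast minus best track, hPa) at one
# lead time for two configurations; values are illustrative only.
errs_a = np.array([6.0, -6.0, 5.0, -5.0])   # scatters around zero
errs_b = np.array([-3.0, -2.0, -4.0, -3.0])  # consistently too deep

for name, e in [("A", errs_a), ("B", errs_b)]:
    raw = e.mean()           # raw (signed) error: the bias
    mae = np.abs(e).mean()   # mean absolute error: includes the scatter
    print(f"config {name}: bias = {raw:+.1f} hPa, MAE = {mae:.1f} hPa")
```

Here configuration A is unbiased but has the larger absolute error, while configuration B is biased but has the smaller absolute error, mirroring how AMHS can win on raw SLP bias at short lead times yet lose on absolute SLP error at long lead times.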
Mean absolute (a) track [nautical miles (n mi; 1 n mi = 1.852 km)], (b) intensity [knots (kt; 1 kt = 0.51 m s−1)], and (c) minimum mean sea level pressure (hPa) errors for AMSA (blue), AMHS (green), and mean pairwise differences AMHS − AMSA (black) with 95% confidence intervals with respect to forecast lead time. Filled pairwise difference markers indicate statistically significant differences. The sample size at each lead time is indicated above.
Mean (a) intensity (kt) and (b) minimum mean SLP (hPa) errors for AMSA (blue), AMHS (green), and mean pairwise differences AMHS − AMSA (black) with 95% confidence intervals with respect to forecast lead time. Filled pairwise difference markers indicate statistically significant differences favoring AMHS (green) or AMSA (blue).
To explore the moisture structure differences noted previously in a more TC-centric manner, and to offer a possible explanation for the improved intensity forecasts, Fig. 16 shows a cross section of the AMHS minus AMSA specific humidity field along 19°N at the analysis time, aggregated over 12 initializations from 0000 UTC 11 August to 1200 UTC 16 August. This period corresponds to Tropical Storm Fay during its tropical wave and depression stages; the cross section follows the general latitude of the TC track (Fig. 13) and is representative of the average differences between the two configurations for the other TCs. The AMHS configuration generally has less moisture than AMSA at all levels above 850 hPa at all longitudes, including the areas nearest to Fay (west of 50°W). In this case, negative differences imply that the AMHS experiment is closer to the reanalysis and observational data, because both simulations are moister than the best estimates in this region (Table 2, Figs. 8–12) by more than the minimum negative differences in Fig. 16.
Vertical cross section of mean specific humidity (g kg−1) differences (AMHS − AMSA) aggregated over 12 forecast initialization times between 0000 UTC 11 Aug and 1200 UTC 16 Aug along 19°N.
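The aggregation behind a composite cross section like Fig. 16 amounts to differencing the two analyses and averaging over initialization times; a minimal sketch (the function name and array shapes are illustrative assumptions, not from the study's processing code):

```python
import numpy as np

def mean_cross_section(q_amhs, q_amsa):
    """Mean specific-humidity difference (AMHS - AMSA) along one latitude.

    q_amhs, q_amsa: arrays shaped (n_init, n_level, n_lon) holding the
    analysis-time specific humidity (g/kg) from each configuration along
    the chosen latitude for each initialization.
    Returns the (n_level, n_lon) difference field averaged over inits.
    """
    diff = np.asarray(q_amhs, float) - np.asarray(q_amsa, float)
    return diff.mean(axis=0)  # average over the initialization axis
```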
Admittedly, these experiments are performed at a coarse resolution for intensity and SLP verification, but the results show promise. The additional assimilation of MHS data produces no statistically significant track forecast degradation while further improving intensity forecasts relative to the AMSA configuration. Furthermore, the improvements in midlevel moisture seen in the AMHS simulation (Fig. 16) may have larger impacts in convective-permitting simulations (e.g., simulations without a convective parameterization and with grid spacing less than 5 km), where convection is explicitly simulated, allowing the large-scale moisture field to couple directly to convection (e.g., through entrainment) rather than through a convective parameterization.
5. Summary
This work builds on previous studies (Schwartz et al. 2012; L12) assimilating satellite radiances in a limited-area EnKF system. L12 previously established improvements from AMSU-A radiance assimilation; therefore, the AMSU-A radiance assimilation experiment (AMSA) is used as a baseline to test the impacts of additionally assimilating MHS radiances (AMHS) in this study. Aggregated forecast verification statistics at specific lead times and levels compared to ERA-I show statistically and practically significant improvements in midlevel specific humidity bias and RMSE in the AMHS configuration (Figs. 5–7). Spatial comparisons to the ERA-I and NVAP-M datasets show both configurations are generally too moist, but the AMHS configuration has reduced areal bias and RMSE statistics (Table 2, Figs. 8–12). Additionally, AMHS has statistically significant improvements over large regions of the domain at 700 hPa for the analysis and 48-h forecast (Fig. 9).
The computational domain and time period contained the five tropical cyclones that were the focus of L12. The AMHS experiment did not show a consistent signal of additional improvement or degradation relative to the AMSA results for hurricane track forecasts, which is notable in itself, since the AMSA configuration produced statistically significant improvement over forecasts without AMSU-A assimilation. There is statistically significant improvement in the absolute intensity forecasts at the 18- and 24-h lead times, with generally positive but not statistically significant improvements at all lead times out to 54 h. When raw errors (forecast bias) are examined, the AMHS configuration improves raw intensity out to 54 h. However, there is degradation in the absolute minimum SLP forecasts at the longest lead times, which persists from 42 to 54 h in the raw minimum SLP errors; the raw minimum SLP errors show statistically significant improvements favoring AMHS out to 36 h. The generally improved intensity forecasts (Figs. 14b and 15a), coincident with mixed improvements and degradations in the SLP forecasts (Figs. 14c and 15b), likely relate to the coarse resolution of the simulations, which requires a convective parameterization and only partially resolves the inner core of the TCs. The improved midlevel moisture field (Figs. 8–12 and 16) in the AMHS configuration suggests further improvements are possible at convective-permitting resolutions, where the grid-scale atmospheric structure is directly coupled to grid-scale convection.
These results show that significant improvements in simulated moisture, and improvements in TC intensity forecasts, are possible in limited-area models through the assimilation of MHS radiances; however, several areas merit further exploration and improvement. First, lower-quality data passing QC, due to a potentially less effective QC algorithm for MHS brightness temperatures (Yan et al. 2010; Guan et al. 2011; Zou et al. 2013), generally resulted in slight analysis and forecast degradation in temperature, and to a lesser extent wind, compared to the AMSA configuration (Figs. 6 and 7). This suggests that improved quality control procedures will likely be needed when assimilating MHS radiances in regional domains to realize further positive impacts, as seen in global models (Yan et al. 2010) and with other regional DA systems (Zou et al. 2013). Sensitivity tests examining the impact of the observation localization function in the EnKF system could also be explored. In a broader sense, these results suggest that simulations at convective-permitting resolutions should be performed to more fully explore the impact of improved midlevel moisture and column precipitable water (Figs. 11 and 16) on tropical convection and TC genesis, structure, and size.
Acknowledgments
This work has been performed under the auspices of the Developmental Testbed Center. The Developmental Testbed Center is funded by the National Oceanic and Atmospheric Administration, the U.S. Air Force, and the National Center for Atmospheric Research. This work was supported by the U.S. Air Force. We thank the anonymous reviewers for their comments, which helped to improve the quality of the final submission.
REFERENCES
Anderson, J. L., 2001: An ensemble adjustment Kalman filter for data assimilation. Mon. Wea. Rev., 129, 2884–2903, doi:10.1175/1520-0493(2001)129<2884:AEAKFF>2.0.CO;2.
Anderson, J. L., 2003: A local least squares framework for ensemble filtering. Mon. Wea. Rev., 131, 634–642, doi:10.1175/1520-0493(2003)131<0634:ALLSFF>2.0.CO;2.
Anderson, J. L., Hoar T. , Raeder K. , Liu H. , Collins N. , Torn R. , and Arellano A. , 2009: The Data Assimilation Research Testbed: A community facility. Bull. Amer. Meteor. Soc., 90, 1283–1296, doi:10.1175/2009BAMS2618.1.
Andersson, E., Hollingsworth A. , Kelly G. , Lönnberg P. , Pailleux J. , and Zhang Z. , 1991: Global observing system experiments on operational statistical retrievals of satellite sounding data. Mon. Wea. Rev., 119, 1851–1865, doi:10.1175/1520-0493(1991)119<1851:GOSEOO>2.0.CO;2.
Aravéquia, J. A., Szunyogh I. , Fertig E. J. , Kalnay E. , Kuhl D. , and Kostelich E. J. , 2011: Evaluation of a strategy for the assimilation of satellite radiance observations with the local ensemble transform Kalman filter. Mon. Wea. Rev., 139, 1932–1951, doi:10.1175/2010MWR3515.1.
Auligné, T., McNally A. P. , and Dee D. P. , 2007: Adaptive bias correction for satellite data in a numerical weather prediction system. Quart. J. Roy. Meteor. Soc.,133, 631–642, doi:10.1002/qj.56.
Barker, D., and Coauthors, 2012: The Weather Research and Forecasting Model’s Community Variational/Ensemble Data Assimilation System: WRFDA. Bull. Amer. Meteor. Soc., 93, 831–843, doi:10.1175/BAMS-D-11-00167.1.
Bouttier, F., and Kelly G. , 2001: Observing-system experiments in the ECMWF 4D-Var data assimilation system. Quart. J. Roy. Meteor. Soc., 127, 1469–1488, doi:10.1002/qj.49712757419.
Brown, B. G., Gotway J. H. , Bullock R. , Gilleland E. , and Ahijevych D. , 2009: The Model Evaluation Tools (MET): Community tools for forecast evaluation. Preprints, 25th Conf. on Interactive Information and Processing Systems (IIPS) for Meteorology, Oceanography, and Hydrology, Phoenix, AZ, Amer. Meteor. Soc., 9A.6. [Available online at https://ams.confex.com/ams/pdfpapers/151349.pdf.]
Buehner, M., Houtekamer P. L. , Charette C. , Mitchell H. L. , and He B. , 2010a: Intercomparison of variational data assimilation and the ensemble Kalman filter for global deterministic NWP. Part I: Description and single-observation experiments. Mon. Wea. Rev., 138, 1550–1566, doi:10.1175/2009MWR3157.1.
Buehner, M., Houtekamer P. L. , Charette C. , Mitchell H. L. , and He B. , 2010b: Intercomparison of variational data assimilation and the ensemble Kalman filter for global deterministic NWP. Part II: One-month experiments with real observations. Mon. Wea. Rev., 138, 1567–1586, doi:10.1175/2009MWR3158.1.
Burgers, G., van Leeuwen P. J. , and Evensen G. , 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev., 126, 1719–1724, doi:10.1175/1520-0493(1998)126<1719:ASITEK>2.0.CO;2.
Cardinali, C., 2009: Monitoring the observation impact on the short-range forecast. Quart. J. Roy. Meteor. Soc., 135, 239–250, doi:10.1002/qj.366.
Casati, B., and Coauthors, 2008: Forecast verification: Current status and future directions. Meteor. Appl., 15, 3–18, doi:10.1002/met.52.
Cavallo, S. M., Dudhia J. , and Snyder C. , 2011: A multilayer upper-boundary condition for longwave radiative flux to correct temperature biases in a mesoscale model. Mon. Wea. Rev., 139, 1952–1959, doi:10.1175/2010MWR3513.1.
Chen, F., and Dudhia J. , 2001: Coupling an advanced land surface–hydrology model with the Penn State–NCAR MM5 modeling system. Part I: Model description and implementation. Mon. Wea. Rev., 129, 569–585, doi:10.1175/1520-0493(2001)129<0569:CAALSH>2.0.CO;2.
Chen, Y., and Snyder C. , 2007: Assimilating vortex position with an ensemble Kalman filter. Mon. Wea. Rev., 135, 1828–1845, doi:10.1175/MWR3351.1.
Chou, M.-D., and Suarez M. J. , 1994: An efficient thermal infrared radiation parameterization for use in general circulation models. NASA Tech. Memo. 104606, Vol. 3, 85 pp.
Collard, A., Hilton F. , Forsythe M. , and Candy B. , 2011: From Observations to Forecasts—Part 8: The use of satellite observations in numerical weather prediction. Weather, 66 (2), 31–36, doi:10.1002/wea.736.
Davis, G., 2007: History of the NOAA satellite program. J. Appl. Remote Sens., 1, 012504, doi:10.1117/1.2642347.
Dee, D. P., 2005: Bias and data assimilation. Quart. J. Roy. Meteor. Soc., 131, 3323–3343, doi:10.1256/qj.05.137.
Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, doi:10.1002/qj.828.
Derber, J. C., and Wu W.-S. , 1998: The use of TOVS cloud-cleared radiances in the NCEP SSI analysis system. Mon. Wea. Rev., 126, 2287–2299, doi:10.1175/1520-0493(1998)126<2287:TUOTCC>2.0.CO;2.
Developmental Testbed Center, 2014: Model Evaluation Tools–Tropical cyclone user’s guide. Developmental Testbed Center, accessed 28 July 2014, 29 pp. [Available online at http://www.dtcenter.org/met/users/docs/users_guide/MET-TC_Users_Guide_v4.1.pdf.]
Dunion, J. P., and Velden C. S. , 2004: The impact of the Saharan air layer on Atlantic tropical cyclone activity. Bull. Amer. Meteor. Soc., 85, 353–365, doi:10.1175/BAMS-85-3-353.
Ebert, E., and Coauthors, 2013: Progress and challenges in forecast verification. Meteor. Appl., 20, 130–139, doi:10.1002/met.1392.
Evensen, G., 1994: Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res., 99, 10 143–10 162, doi:10.1029/94JC00572.
Fiorino, M., 2009: Record-setting performance of the ECMWF IFS in medium-range tropical cyclone track prediction. ECMWF Newsletter, No. 118, Reading, United Kingdom, 20–27.
Guan, L., Zou X. , Weng F. , and Li G. , 2011: Assessments of FY-3A microwave humidity sounder measurements using NOAA-18 microwave humidity sounder. J. Geophys. Res., 116, D10106, doi:10.1029/2010JD015412.
Hamill, T. M., 1999: Hypothesis tests for evaluating numerical precipitation forecasts. Wea. Forecasting, 14, 155–167, doi:10.1175/1520-0434(1999)014<0155:HTFENP>2.0.CO;2.
Hamill, T. M., Whitaker J. S. , Fiorino M. , and Benjamin S. G. , 2011: Global ensemble predictions of 2009’s tropical cyclones initialized with an ensemble Kalman filter. Mon. Wea. Rev., 139, 668–688, doi:10.1175/2010MWR3456.1.
Han, Y., van Delst P. , Liu Q. , Weng F. , Yan B. , Treadon R. , and Derber J. , 2006: JCSDA Community Radiative Transfer Model (CRTM)—Version 1. NOAA Tech. Rep. NESDIS 122, 33 pp.
Hill, K. A., and Lackmann G. M. , 2009: Influence of environmental humidity on tropical cyclone size. Mon. Wea. Rev., 137, 3294–3315, doi:10.1175/2009MWR2679.1.
Holloway, C. E., and Neelin J. D. , 2009: Moisture vertical structure, column water vapor, and tropical deep convection. J. Atmos. Sci., 66, 1665–1683, doi:10.1175/2008JAS2806.1.
Hong, S.-Y., Dudhia J. , and Chen S.-H. , 2004: A revised approach to ice microphysical processes for the bulk parameterization of clouds and precipitation. Mon. Wea. Rev., 132, 103–120, doi:10.1175/1520-0493(2004)132<0103:ARATIM>2.0.CO;2.
Hong, S.-Y., Noh Y. , and Dudhia J. , 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, doi:10.1175/MWR3199.1.
Houtekamer, P. L., and Mitchell H. L. , 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev., 126, 796–811, doi:10.1175/1520-0493(1998)126<0796:DAUAEK>2.0.CO;2.
Houtekamer, P. L., Mitchell H. L. , Pellerin G. , Buehner M. , Charron M. , Spacek L. , and Hansen B. , 2005: Atmospheric data assimilation with an ensemble Kalman filter: Results with real observations. Mon. Wea. Rev., 133, 604–620, doi:10.1175/MWR-2864.1.
Janoušek, M., Simmons A. J. , and Richardson D. , 2012: Plots of the long-term evolution of operational forecast skill updated. ECMWF Newsletter, No. 132, ECMWF, Reading, United Kingdom, 11–12.
Joo, S., Eyre J. , and Marriott R. , 2013: The impact of MetOp and other satellite data within the Met Office Global NWP system using an adjoint-based sensitivity method. Mon. Wea. Rev., 141, 3331–3342, doi:10.1175/MWR-D-12-00232.1.
Kain, J. S., and Fritsch J. M. , 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci., 47, 2784–2802, doi:10.1175/1520-0469(1990)047<2784:AODEPM>2.0.CO;2.
Liu, H., Anderson J. , Kuo Y.-H. , and Raeder K. , 2007: Importance of forecast error multivariate correlations in idealized assimilation of GPS radio occultation data with the ensemble adjustment filter. Mon. Wea. Rev., 135, 173–185, doi:10.1175/MWR3270.1.
Liu, Q., and Weng F. , 2006: Advanced doubling-adding method for radiative transfer in planetary atmosphere. J. Atmos. Sci., 63, 3459–3465, doi:10.1175/JAS3808.1.
Liu, Z., Schwartz C. S. , Snyder C. , and Ha S. Y. , 2012: Impact of assimilating AMSU-A radiances on forecasts of 2008 Atlantic tropical cyclones initialized with a limited-area ensemble Kalman filter. Mon. Wea. Rev., 140, 4017–4034, doi:10.1175/MWR-D-12-00083.1.
Lorenz, C., and Kunstmann H. , 2012: The hydrological cycle in three state-of-the-art reanalyses: Intercomparison and performance analysis. J. Hydrometeor., 13, 1397–1420, doi:10.1175/JHM-D-11-088.1.
Meng, Z., and Zhang F. , 2008a: Tests of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part III: Comparison with 3DVAR in a real-data case study. Mon. Wea. Rev., 136, 522–540, doi:10.1175/2007MWR2106.1.
Meng, Z., and Zhang F. , 2008b: Test of an ensemble Kalman filter for mesoscale and regional-scale data assimilation. Part IV: Comparison with 3DVAR in a month-long experiment. Mon. Wea. Rev., 136, 3671–3682, doi:10.1175/2008MWR2270.1.
Miyoshi, T., and Sato Y. , 2007: Assimilating satellite radiances with a local ensemble transform Kalman filter (LETKF) applied to the JMA global model (GSM). SOLA, 3, 37–40, doi:10.2151/sola.2007-010.
Miyoshi, T., Sato Y. , and Kadowaki T. , 2010: Ensemble Kalman filter and 4D-Var intercomparison with the Japanese operational global analysis and prediction system. Mon. Wea. Rev., 138, 2846–2866, doi:10.1175/2010MWR3209.1.
Mlawer, E. J., Taubman S. J. , Brown P. D. , Iacono M. J. , and Clough S. A. , 1997: Radiative transfer for inhomogeneous atmospheres: RRTM, a validated correlated-k model for the longwave. J. Geophys. Res., 102, 16 663–16 682, doi:10.1029/97JD00237.
Mo, K. C., Wang X. L. , Kistler R. , Kanamitsu M. , and Kalnay E. , 1995: Impact of satellite data on the CDAS-Reanalysis system. Mon. Wea. Rev., 123, 124–139, doi:10.1175/1520-0493(1995)123<0124:IOSDAT>2.0.CO;2.
NOAA, 2014: NOAA KLM user’s guide. NOAA/NASA, accessed 25 July 2014. [Available online at http://www.ncdc.noaa.gov/oa/pod-guide/ncdc/docs/klm/index.htm.]
Schwartz, C. S., Liu Z. , Chen Y. , and Huang X.-Y. , 2012: Impact of assimilating microwave radiances with a limited-area ensemble data assimilation system on forecasts of Typhoon Morakot. Wea. Forecasting, 27, 424–437, doi:10.1175/WAF-D-11-00033.1.
Simmons, A. J., and Hollingsworth A. , 2002: Some aspects of the improvement in skill of numerical weather prediction. Quart. J. Roy. Meteor. Soc., 128, 647–677, doi:10.1256/003590002321042135.
Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF, version 3. NCAR Tech. Note NCAR/TN–475+STR, 113 pp. [Available online at http://www2.mmm.ucar.edu/wrf/users/docs/arw_v3.pdf.]
Szunyogh, I., Kostelich E. J. , Gyarmati G. , Kalnay E. , Hunt B. R. , Ott E. , Satterfield E. , and Yorke J. A. , 2008: A local ensemble transform Kalman filter data assimilation system for the NCEP global model. Tellus, 60, 113–130, doi:10.1111/j.1600-0870.2007.00274.x.
Tompkins, A. M., 2001: Organization of tropical convection in low vertical wind shears: The role of water vapor. J. Atmos. Sci., 58, 529–545, doi:10.1175/1520-0469(2001)058<0529:OOTCIL>2.0.CO;2.
Torn, R. D., 2010: Performance of a mesoscale ensemble Kalman filter (EnKF) during the NOAA high-resolution hurricane test. Mon. Wea. Rev., 138, 4375–4392, doi:10.1175/2010MWR3361.1.
Torn, R. D., and Hakim G. J. , 2008: Performance characteristics of a pseudo-operational ensemble Kalman filter. Mon. Wea. Rev., 136, 3947–3963, doi:10.1175/2008MWR2443.1.
Torn, R. D., Hakim G. J. , and Snyder C. , 2006: Boundary conditions for limited-area ensemble Kalman filters. Mon. Wea. Rev., 134, 2490–2502, doi:10.1175/MWR3187.1.
Trenberth, K., and Fasullo J. T. , 2013: Regional energy and water cycles: Transports from ocean to land. J. Climate, 26, 7837–7851, doi:10.1175/JCLI-D-13-00008.1.
Trenberth, K., Fasullo J. T. , and Mackaro J. , 2011: Atmospheric moisture transports from ocean to land and global energy flows in reanalyses. J. Climate, 24, 4907–4924, doi:10.1175/2011JCLI4171.1.
United Nations Economic Commission for Europe, 2010: Making data meaningful. Part 2: A guide to presenting statistics. UNECE, 58 pp. [Available online at http://www.unece.org/fileadmin/DAM/stats/documents/writing/MDM_Part2_English.pdf.]
Vonder Haar, T. H., Bytheway J. L. , and Forsythe J. M. , 2012: Weather and climate analyses using improved global water vapor observations. Geophys. Res. Lett., 39, L15802, doi:10.1029/2012GL052094.
Warner, T. T., Peterson R. A. , and Treadon R. E. , 1997: A tutorial on lateral boundary conditions as a basic and potentially serious limitation to regional numerical weather prediction. Bull. Amer. Meteor. Soc., 78, 2599–2617, doi:10.1175/1520-0477(1997)078<2599:ATOLBC>2.0.CO;2.
Whitaker, J. S., Hamill T. M. , Wei X. , Song Y. , and Toth Z. , 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482, doi:10.1175/2007MWR2018.1.
WMO, 2010: Guide to Meteorological Instruments and Methods of Observation. WMO-No. 8, WMO. [Available online at http://library.wmo.int/pmb_ged/wmo_8_en-2012.pdf.]
Xu, J., Rugg S. , Byerle L. , and Liu Z. , 2009: Weather forecasts by the WRF-ARW Model with the GSI data assimilation system in the complex terrain areas of southwest Asia. Wea. Forecasting, 24, 987–1008, doi:10.1175/2009WAF2222229.1.
Yan, B., Weng F. , and Derber J. , 2010: Assimilation of satellite microwave water vapor sounding channel data in NCEP Global Forecast System (GFS). 17th Int. TOVS Study Conf., Monterey, CA, International ATOVS Working Group.
Yang, F., 2014: Historical performances of global NWP models. NCEP/EMC/Global Climate and Weather Modeling Branch, accessed 6 April 2014. [Available online at http://www.emc.ncep.noaa.gov/gmb/STATS_vsdb/longterm/.]
Zapotocny, T. H., Menzel W. P. , Jung J. A. , and Nelson J. P. III, 2005: A four-season impact study of rawinsonde, GOES, and POES data in the Eta Data Assimilation System. Part II: Contribution of the components. Wea. Forecasting, 20, 178–198, doi:10.1175/WAF838.1.
Zapotocny, T. H., Jung J. A. , Le Marshall J. F. , and Treadon R. E. , 2007: A two-season impact study of satellite and in situ data in the NCEP Global Data Assimilation System. Wea. Forecasting, 22, 887–909, doi:10.1175/WAF1025.1.
Zhang, F., Zhang M. , and Poterjoy J. , 2013: E3DVar: Coupling an ensemble Kalman filter with three-dimensional variational data assimilation in a limited-area weather prediction model and comparison to E4DVar. Mon. Wea. Rev., 141, 900–917, doi:10.1175/MWR-D-12-00075.1.
Zhang, M., Zhang F. , Huang X.-Y. , and Zhang X. , 2011: Intercomparison of an ensemble Kalman filter with three- and four-dimensional variational data assimilation methods in a limited-area model over the month of June 2003. Mon. Wea. Rev., 139, 566–572, doi:10.1175/2010MWR3610.1.
Zou, X., Qin Z. , and Weng F. , 2013: Improved quantitative precipitation forecasts by MHS radiance data assimilation with a newly added cloud detection algorithm. Mon. Wea. Rev., 141, 3203–3221, doi:10.1175/MWR-D-13-00009.1.