• Arnold, C. P., Jr., and C. H. Dey, 1986: Observing-systems simulation experiments: Past, present, and future. Bull. Amer. Meteor. Soc., 67, 687–695, doi:10.1175/1520-0477(1986)067<0687:OSSEPP>2.0.CO;2.
• Benjamin, S. G., and Coauthors, 2004a: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495–518, doi:10.1175/1520-0493(2004)132<0495:AHACTR>2.0.CO;2.
• Benjamin, S. G., B. E. Schwartz, E. J. Szoke, and S. E. Koch, 2004b: The value of wind profiler data in U.S. weather forecasting. Bull. Amer. Meteor. Soc., 85, 1871–1886, doi:10.1175/BAMS-85-12-1871.
• Benjamin, S. G., S. Weygandt, D. Devenyi, J. M. Brown, G. Manikin, T. L. Smith, and T. Smirnova, 2004c: Improved moisture and PBL initialization in the RUC using METAR data. 22nd Conf. on Severe Local Storms, Hyannis, MA, Amer. Meteor. Soc., 17.3. [Available online at https://ams.confex.com/ams/11aram22sls/techprogram/paper_82023.htm.]
• Benjamin, S. G., B. D. Jamison, W. R. Moninger, S. R. Sahm, B. Schwartz, and T. W. Schlatter, 2010: Relative short-range forecast impact from aircraft, profiler, radiosonde, VAD, GPS-PW, METAR, and mesonet observations via the RUC hourly assimilation cycle. Mon. Wea. Rev., 138, 1319–1343, doi:10.1175/2009MWR3097.1.
• Benjamin, S. G., and Coauthors, 2016: A North American hourly assimilation and model forecast cycle: The Rapid Refresh. Mon. Wea. Rev., 144, 1669–1694, doi:10.1175/MWR-D-15-0242.1.
• Bresky, W. C., J. M. Daniels, A. A. Bailey, and S. T. Wanzong, 2012: New methods toward minimizing the slow speed bias associated with atmospheric motion vectors. J. Appl. Meteor. Climatol., 51, 2137–2151, doi:10.1175/JAMC-D-11-0234.1.
• Cardinali, C., 2009: Monitoring the observation impact on the short-range forecast. Quart. J. Roy. Meteor. Soc., 135, 239–258, doi:10.1002/qj.366.
• Côté, J., M. M. Roch, A. Staniforth, and L. Fillion, 1993: A variable-resolution semi-Lagrangian finite-element global model of the shallow water equations. Mon. Wea. Rev., 121, 231–243, doi:10.1175/1520-0493(1993)121<0231:AVRSLF>2.0.CO;2.
• Daniels, T. S., W. R. Moninger, and R. D. Mamrosh, 2006: Tropospheric Airborne Meteorological Data Reporting (TAMDAR) overview. 10th Symp. on Integrated Observing and Assimilation Systems for Atmosphere, Oceans, and Land Surface, Atlanta, GA, Amer. Meteor. Soc., 9.1. [Available online at https://ams.confex.com/ams/Annual2006/techprogram/paper_104773.htm.]
• Gutman, S. I., and S. G. Benjamin, 2001: The role of ground-based GPS meteorological observations in numerical weather prediction. GPS Solutions, 4, 16–24, doi:10.1007/PL00012860.
• Hollingsworth, A., P. Lonnberg, L. Illari, K. Arpe, and A. J. Simmons, 1986: Monitoring of observation and analysis quality by a data assimilation system. Mon. Wea. Rev., 114, 861–879, doi:10.1175/1520-0493(1986)114<0861:MOOAAQ>2.0.CO;2.
• Huang, X.-Y., and P. Lynch, 1993: Diabatic digital-filtering initialization: Application to the HIRLAM model. Mon. Wea. Rev., 121, 589–603, doi:10.1175/1520-0493(1993)121<0589:DDFIAT>2.0.CO;2.
• Ingleby, B., 2015: Global assimilation of air temperature, humidity, wind and pressure from surface stations. Quart. J. Roy. Meteor. Soc., 141, 504–517, doi:10.1002/qj.2372.
• Kalnay, E., S. J. Lord, and R. D. McPherson, 1998: Maturity of operational numerical weather prediction: Medium range. Bull. Amer. Meteor. Soc., 79, 2753–2769, doi:10.1175/1520-0477(1998)079<2753:MOONWP>2.0.CO;2.
• Kelleher, K. E., and Coauthors, 2007: Project CRAFT: A real-time delivery system for NEXRAD level II data via the Internet. Bull. Amer. Meteor. Soc., 88, 1045–1057, doi:10.1175/BAMS-88-7-1045.
• Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, W.-S. Wu, and S. Lord, 2009: Introduction of the GSI into the NCEP Global Data Assimilation System. Wea. Forecasting, 24, 1691–1705, doi:10.1175/2009WAF2222201.1.
• Lakshmanan, V., J. Zhang, and K. Howard, 2010: A technique to censor biological echoes in radar reflectivity. J. Appl. Meteor. Climatol., 49, 453–462, doi:10.1175/2009JAMC2255.1.
• Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56, 189–201, doi:10.3402/tellusa.v56i3.14413.
• Langland, R. H., and Coauthors, 2016: Forecast Sensitivity–Observation Impact (FSOI) Inter-comparison Experiment. 13th Int. Winds Workshop, Monterey, CA, NOAA–EUMETSAT–WMO. [Available online at http://cimss.ssec.wisc.edu/iwwg/iww13/talks/01_Monday/1650_IWW13_NRL_FSOI_Langland.pdf.]
• Le Marshall, J., and Coauthors, 2007: The Joint Center for Satellite Data Assimilation. Bull. Amer. Meteor. Soc., 88, 329–340, doi:10.1175/BAMS-88-3-329.
• Lhermitte, R. M., and D. Atlas, 1960: Precipitation motion by pulse Doppler. Preprints, Ninth Weather Radar Conf., Kansas City, MO, Amer. Meteor. Soc., 218–223.
• Lin, H., S. S. Weygandt, S. G. Benjamin, and M. Hu, 2017: Satellite radiance data assimilation within the hourly updated Rapid Refresh. Wea. Forecasting, doi:10.1175/WAF-D-16-0215.1, in press.
• Lupu, C., C. Cardinali, and A. P. McNally, 2015: Adjoint-based forecast sensitivity applied to observation-error variance tuning. Quart. J. Roy. Meteor. Soc., 141, 3157–3165, doi:10.1002/qj.2599.
• McMurdie, L., and C. Mass, 2004: Major numerical forecast failures over the northeast Pacific. Wea. Forecasting, 19, 338–356, doi:10.1175/1520-0434(2004)019<0338:MNFFOT>2.0.CO;2.
• Minnis, P., and Coauthors, 2008: Near-real time cloud retrievals from operational and research meteorological satellites. Remote Sensing of Clouds and the Atmosphere XIII, R. H. Picard et al., Eds., International Society for Optical Engineering (SPIE Proceedings, Vol. 7107-2), 710703, doi:10.1117/12.800344.
• Minnis, P., and Coauthors, 2011: CERES edition-2 cloud property retrievals using TRMM VIRS and Terra and Aqua MODIS data—Part I: Algorithms. IEEE Trans. Geosci. Remote Sens., 49, 4374–4400, doi:10.1109/TGRS.2011.2144601.
• Moninger, W. R., R. D. Mamrosh, and P. M. Pauley, 2003: Automated meteorological reports from commercial aircraft. Bull. Amer. Meteor. Soc., 84, 203–216, doi:10.1175/BAMS-84-2-203.
• Moninger, W. R., S. G. Benjamin, B. D. Jamison, T. W. Schlatter, T. L. Smith, and E. J. Szoke, 2010: Evaluation of regional aircraft observations using TAMDAR. Wea. Forecasting, 25, 627–645, doi:10.1175/2009WAF2222321.1.
• NOAA, 2013: NOAA research program overview: Sandy supplemental. NOAA Rep., 2 pp. [Available online at http://research.noaa.gov/sites/oar/Documents/oarProgramOverview_SandySupplemental_CC.pdf.]
• Peckham, S. E., T. G. Smirnova, S. G. Benjamin, J. M. Brown, and J. S. Kenyon, 2016: Implementation of a digital filter initialization in the WRF Model and its application in the Rapid Refresh. Mon. Wea. Rev., 144, 99–106, doi:10.1175/MWR-D-15-0219.1.
• Petersen, R. A., 2016: On the impacts and benefits of AMDAR observations in operational forecasting. Part I: A review of the impacts of automated aircraft wind and temperature reports. Bull. Amer. Meteor. Soc., 97, 585–602, doi:10.1175/BAMS-D-14-00055.1.
• Petersen, R. A., L. Cronce, R. Mamrosh, R. Baker, and P. Pauley, 2016: On the impact and future benefits of AMDAR observations in operational forecasting. Part II: Water vapor observations. Bull. Amer. Meteor. Soc., 97, 2117–2133, doi:10.1175/BAMS-D-14-00211.1.
• Rogers, E., and Coauthors, 2009: The NCEP North American Mesoscale modeling system: Recent changes and future plans. 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 2A4. [Available online at https://ams.confex.com/ams/23WAF19NWP/techprogram/paper_154114.htm.]
• Ryzhkov, A., S. E. Giangrande, V. M. Melnikov, and T. J. Schuur, 2005: Calibration issues of dual-polarization radar measurements. J. Atmos. Oceanic Technol., 22, 1138–1155, doi:10.1175/JTECH1772.1.
• Shao, H., and Coauthors, 2016: Bridging research to operations transitions: Status and plans of community GSI. Bull. Amer. Meteor. Soc., 97, 1427–1440, doi:10.1175/BAMS-D-13-00245.1.
• Shapiro, M., and A. Thorpe, 2004: THORPEX international science plan, version 3. WMO/TD-1246, WWRP/THORPEX 2, 55 pp. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/CD_ROM_international_science_plan_v3.pdf.]
• Skamarock, W. C., and Coauthors, 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.
• Smith, T. L., S. G. Benjamin, S. I. Gutman, and S. Sahm, 2007: Short-range forecast impact from assimilation of GPS-IPW observations into the Rapid Update Cycle. Mon. Wea. Rev., 135, 2914–2930, doi:10.1175/MWR3436.1.
• Tang, L., J. Zhang, C. Langston, J. Krause, K. Howard, and V. Lakshmanan, 2014: A physically based precipitation–nonprecipitation radar echo classifier using polarimetric and environmental data in a real-time national system. Wea. Forecasting, 29, 1106–1119, doi:10.1175/WAF-D-13-00072.1.
• Tollerud, E. I., and Coauthors, 2013: The DTC ensembles task: A new testing and evaluation facility for mesoscale ensembles. Bull. Amer. Meteor. Soc., 94, 321–327, doi:10.1175/BAMS-D-11-00209.1.
• Velden, C., and Coauthors, 2005: Recent innovations in deriving tropospheric winds from meteorological satellites. Bull. Amer. Meteor. Soc., 86, 205–223, doi:10.1175/BAMS-86-2-205.
• Wang, X., D. Parrish, D. Kleist, and J. Whitaker, 2013: GSI 3DVar-based ensemble–variational hybrid data assimilation for NCEP Global Forecast System: Single-resolution experiments. Mon. Wea. Rev., 141, 4098–4117, doi:10.1175/MWR-D-12-00141.1.
• Weatherhead, E. C., and Coauthors, 1998: Factors affecting the detection of trends: Statistical considerations and applications to environmental data. J. Geophys. Res., 103, 17 149–17 161, doi:10.1029/98JD00995.
• Weckwerth, T. M., 2000: The effect of small-scale moisture variability on thunderstorm initiation. Mon. Wea. Rev., 128, 4017–4030, doi:10.1175/1520-0493(2000)129<4017:TEOSSM>2.0.CO;2.
• Weckwerth, T. M., and Coauthors, 2004: An overview of the International H2O Project (IHOP_2002) and some preliminary highlights. Bull. Amer. Meteor. Soc., 85, 253–277, doi:10.1175/BAMS-85-2-253.
• Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924, doi:10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.
• Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482, doi:10.1175/2007MWR2018.1.
• Wilczak, J. M., and Coauthors, 1995: Contamination of wind profiler data by migrating birds: Characteristics of corrupted data and potential solutions. J. Atmos. Oceanic Technol., 12, 449–467, doi:10.1175/1520-0426(1995)012<0449:COWPDB>2.0.CO;2.
• Wolfe, D. E., and S. I. Gutman, 2000: Developing an operational, surface-based, GPS, water vapor observing system for NOAA: Network design and results. J. Atmos. Oceanic Technol., 17, 426–440, doi:10.1175/1520-0426(2000)017<0426:DAOSBG>2.0.CO;2.
• WMO, 2016: USA AMDAR program—Smoothed monthly average of daily (aircraft) report totals. [Available online at https://www.wmo.int/pages/prog/www/GOS/ABO/data/statistics/aircraft_obs_cmc_mthly_ave_daily_reports_by_type.jpg.]
• Wu, W.-S., R. J. Purser, and D. F. Parrish, 2002: Three-dimensional variational analysis with spatially inhomogeneous covariances. Mon. Wea. Rev., 130, 2905–2916, doi:10.1175/1520-0493(2002)130<2905:TDVAWS>2.0.CO;2.
• Zhu, Y., and R. Gelaro, 2008: Observation sensitivity calculations using the adjoint of the Gridpoint Statistical Interpolation (GSI) analysis system. Mon. Wea. Rev., 136, 335–351, doi:10.1175/MWR3525.1.
List of figures:

• Fig. 1. Map of North America showing the computational domains of RAP (white boundary) and HRRR (green boundary).
• Fig. 2. Configuration of the RAP partial cycling. Black circles represent the RAP data assimilation cycle with a background supplied by a prior forecast (represented by the gray arrows). At 0300 UTC, a parallel “partial” cycle is initialized from GFS atmospheric fields but using the RUC LSM state. A background from this partial cycle is used for the data assimilation in the primary “full cycle” 6 h later, at 0900 UTC. The procedure is repeated during 1200–2300 UTC.
• Fig. 3. Vertical profile of RAPv3 fit to rawinsonde observations during January–December 2015 for 6-h forecasts (blue), 0-h analyses (red), and the difference (6 h minus 0 h, black) for (a) wind, (b) temperature, and (c) RH. Rectangular boxes every 50 hPa contain values significant at the 95% level.
• Fig. 4. Differences in wind RMS vector error (vs rawinsonde) between observation denial experiments listed in Table 4 and the control run for the 1000–100-hPa vector wind (m s−1) for the RAP (North America) domain (see Fig. 1). Results for each of the 10 observational denial experiments are coded with a different color [raob, navy blue; aircraft, pink; profiler, steel blue; radar reflectivity, purple; VAD, light blue; GPS PW, forest green; GOES satellite observations, light green; surface, red; mesonet, yellow (shown only for summer); satellite cloud-drift winds, sky blue]. Four adjacent bars are shown for each OSE for 3-, 6-, 9-, and 12-h forecasts. Results are shown for three seasons (Table 3): (a) summer, (b) spring transition, and (c) winter. Statistical uncertainties are indicated for each OSE by the narrow black vertical lines showing ±1 standard error from the mean impact.
• Fig. 5. As in Fig. 4, but for temperature RMS error (K).
• Fig. 6. As in Fig. 4, but for 1000–400-hPa RH RMS error (% from 0 to 100).
• Fig. 7. Observation impact results integrated over all three seasons, similar to Figs. 4–6, showing (a) wind, (b) temperature, and (c) RH. The horizontal black dashed lines indicate the level of 25% forecast error reduction, as shown in Table 5.
• Fig. 8. (a) Temporal and (b) vertical consistency for 6-h wind forecast error (vs raobs) for control (red) and no-aircraft (blue) experiments and the difference (black) for the July 2014 summer period verified over the entire RAP domain.
• Fig. 9. As in Fig. 4, but for (a)–(c) summer, (d)–(f) spring, and (g)–(i) winter, and stratified by layer: (a),(d),(g) 1000–800, (b),(e),(h) 800–400, and (c),(f),(i) 400–100 hPa.
• Fig. 10. Profile of 6-h forecast RH bias (%, 0–100) for control run (red) and no-GOES experiment (blue) for (a) summer and (b) winter, verified over the entire RAP domain. Differences are shown in black. Rectangular boxes containing values significant at the 95% level are also shown at each level.
• Fig. 11. As in Fig. 9, but for temperature RMS error (K) and for summer only.
• Fig. 12. As in Fig. 9, but for RH RMS error (% from 0 to 100) and for summer only.
• Fig. 13. As in Fig. 7, but for summer only, and for the 1000–600-hPa layer (lower troposphere) results only, and for verification times (a)–(c) daytime (0000 UTC) and (d)–(f) nighttime (1200 UTC) for (a),(d) wind, (b),(e) temperature, and (c),(f) RH.
• Fig. 14. As in Fig. 7, but for summer only and showing aircraft-specific experiment results [all aircraft, pink; cruising aircraft (above 350 hPa), orange; ascent–descent aircraft (below 350 hPa), burgundy; aircraft RH, aquamarine; aircraft temperature/RH, yellow]. Results are for RMSEs of (a) 1000–100-hPa vector wind, (b) 1000–100-hPa temperature, and (c) 1000–400-hPa RH.
• Fig. 15. Vertical forecast impact (similar to Fig. 8b) for 6-h RH forecasts valid at 0000 UTC over the CONUS from denial of aircraft observations overall (red) and denial of aircraft moisture observations only (blue). Results are from the July 2014 test period. Rectangular boxes containing values significant at the 95% level are also shown at each level.
• Fig. 16. As in Fig. 14, but for 400–100-hPa (top level) vector wind RMSE.
• Fig. 17. Differences in RMS error (vs rawinsondes) between surface observation assimilation experiments and control run for the (a) 1000–600-hPa vector wind (m s−1), (b) temperature (K), and (c) RH (%, 0–100) for the RAP (North America) domain (see Fig. 1) during the summer period, at 0000 UTC. Results for each of five surface assimilation experiments are coded with a different color (original surface OSE, red; deny all pseudo-observations, orange; use reduced pseudo-observation density, burgundy; apply 1200-m cloud-building height limit for METARs, yellow; combined reduced pseudo-observation density and 1200-m cloud-building height limit for METARs, purple). Four adjacent bars are shown for each experiment for 3-, 6-, 9-, and 12-h forecasts. Statistical uncertainties are indicated for each experiment by narrow black vertical lines showing ±1 standard error from the mean impact.
• Fig. 18. As in Fig. 15, but showing the vertical forecast impact for 6-h forecasts of (a) vector wind (m s−1), (b) temperature (K), and (c) RH (%; 0–100) for the original no-surface experiment (blue) and the revised pseudo-observation (red) density configuration with a 1200-m height limit for METAR cloud building. Results are over the entire RAP domain for the summer test period. Rectangular boxes containing values significant at the 95% level are also shown at each level.
• Fig. 19. As in Fig. 18, but showing the vertical forecast impact for the original no-AMV experiment (blue) and the experiment assimilating AMVs over land as well as over water (red).
• Fig. 20. Average diurnal cycle of RAP 6-h forecast RMSEs vs METAR observations for (a),(d) temperature (K), (b),(e) dewpoint temperature (K), and (c),(f) vector wind (m s−1) for the (a)–(c) summer and (d)–(f) winter periods, verified over the HRRR domain (see Fig. 1). Shown are the no-surface experiment (green), the no-raobs experiment (black), the no GPS-PW experiment (blue), and the no-aircraft experiment (red).


Observation System Experiments with the Hourly Updating Rapid Refresh Model Using GSI Hybrid Ensemble–Variational Data Assimilation

  • 1 Cooperative Institute for Research in Environmental Sciences, University of Colorado, and NOAA/OAR/Earth System Research Laboratory/Global Systems Division, Boulder, Colorado
  • | 2 NOAA/OAR/Earth System Research Laboratory/Global Systems Division, Boulder, Colorado

Abstract

A set of observation system experiments (OSEs) over three seasons using the hourly updated Rapid Refresh (RAP) numerical weather prediction (NWP) assimilation–forecast system identifies the importance of the various components of the North American observing system for 3–12-h RAP forecasts. Aircraft observations emerge as the strongest-impact observation type for wind, relative humidity (RH), and temperature forecasts, permitting a 15%–30% reduction in 6-h forecast error in the troposphere and lower stratosphere. Major positive impacts are also seen from rawinsondes, GOES satellite cloud observations, and surface observations, with lesser but still significant impacts from GPS precipitable water (PW) observations, satellite atmospheric motion vectors (AMVs), and radar reflectivity observations. A separate experiment revealed that the aircraft-related RH forecast improvement was augmented by 50% due specifically to the addition of aircraft moisture observations. Additionally, observations from en route aircraft and those from ascending or descending aircraft contribute approximately equally to the overall forecast skill, with the strongest impacts in the respective layers of the observations. Initial results from these OSEs supported implementation of an improved assimilation configuration of boundary layer pseudoinnovations from surface observations, as well as allowing the assimilation of satellite AMVs over land. The breadth of these experiments over the three seasons suggests that observation impact results are applicable to general forecasting skill, not just classes of phenomena during limited time periods.

Denotes content that is immediately available upon publication as open access.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Eric James, eric.james@noaa.gov


1. Introduction

Estimates of current atmospheric conditions are based on in situ and remotely sensed observations from an ever-evolving range of platforms. Physical consistency is imposed on these estimates through the time-evolving assimilation cycle of a numerical weather prediction (NWP) system. The volume of meteorological observations is constantly increasing, both from remote sensing platforms [satellites (Le Marshall et al. 2007) and radar] and from some in situ platforms such as commercial aircraft. The geographical and temporal distribution of observations has a major impact on the initialization of NWP systems. In regions with large data voids, such as the Southern Hemisphere and the global oceans, model forecast skill is significantly degraded (Shapiro and Thorpe 2004; McMurdie and Mass 2004), although this degradation is diminishing as data assimilation techniques improve (e.g., Cardinali 2009).

NWP models, both in the United States and internationally, are constantly improving in a variety of ways, including recent ubiquitous increases in spatial resolution, the incorporation of more advanced data assimilation techniques, and improved physical parameterizations of subgrid-scale processes (Kalnay et al. 1998; Benjamin et al. 2016). However, all NWP systems remain fundamentally tied to the number and quality of the observations being assimilated (Hollingsworth et al. 1986). In the United States, the Hurricane Sandy disaster of 2012 prompted a large investment in research and development related to improving forecasts; the largest fraction of this “Sandy Supplemental” funding is directed toward improving observing systems (NOAA 2013). Questions remain regarding the most cost-effective path toward improved forecasts: How should investment be balanced among expanded observational coverage, high-performance computing infrastructure for NWP, and the development of improved data assimilation and modeling techniques? Which observing systems will best add forecast accuracy? Observation system experiments (OSEs) provide a means of gauging the relative impact of different existing observation types on NWP forecast skill (Arnold and Dey 1986). Although the recently developed forecast sensitivity–observation impact (FSOI) technique can also assess observation impacts (Langland and Baker 2004; Zhu and Gelaro 2008; Lupu et al. 2015), it requires an adjoint model for the data assimilation (Langland et al. 2016), which does not yet exist for the Gridpoint Statistical Interpolation (GSI) analysis system used for the RAP. The FSOI method is a promising avenue for assessing short-range observation impact, but it was not used here.

In the modern heterogeneous observing system, most observation types are frequently updated (i.e., new observational data become available every ~1 h or less). Therefore, in order to make the best use of recent observations, there is a corresponding need to frequently update the analysis and forecast from an NWP system. In the United States, the operational rapidly updating NWP model was upgraded from the Rapid Update Cycle (RUC; Benjamin et al. 2004a) to the Rapid Refresh (RAP; Benjamin et al. 2016, hereafter B16) model in May 2012. Subsequent versions of the RAP were implemented in February 2014 and August 2016. This most recent version of the model [RAP version 3 (RAPv3)], including hybrid ensemble–variational assimilation (e.g., Wang et al. 2013), was tested in this study. OSEs were conducted for the previous RUC model (e.g., Benjamin et al. 2010, hereafter B10), but as yet no published studies have investigated the observation impacts within the RAP system. NWP data assimilation and model upgrades (such as the 2012, 2014, and 2016 upgrades to the RAP) necessitate new OSEs in order to determine the impact of observation types within a modified NWP system. The observation mix has also changed over time, including a significant increase in aircraft data over the United States in recent years (WMO 2016). In this study, we aim to determine the relative impact of many of the observation types assimilated within the RAP modeling system partly to guide possible decision-making regarding the expansion of networks. We want to know whether the relative importance of observation types has changed compared to earlier OSE studies related to rapidly updating NWP systems.

The next section details the RAP model configuration used for the multiseason OSEs, followed by a section describing the experimental design for the OSEs. We then present our results and end the paper with our key conclusions.

2. Rapid Refresh model configuration

This section provides an overview of the RAP NWP system (B16), including its data assimilation component, its model component, and details related to its real-time application within NOAA. The volume of observations being ingested with each hourly cycle is then briefly summarized, highlighting some of the more important observation types, and followed by a summary of the most important differences from the RUC system used for the B10 OSE study.

a. NWP system overview

The version of the RAP used in these experiments is RAPv3, run operationally at the National Centers for Environmental Prediction (NCEP) starting in August 2016 and as described in B16. This version uses version 3.6.1 of the community-supported Advanced Research version of the Weather Research and Forecasting Model (WRF-ARW; Skamarock et al. 2008). The RAPv3 uses a 954 × 835 model domain covering all of North America with a horizontal grid spacing of 13 km near the center of the domain (Fig. 1). The model computational grid is a rotated latitude–longitude grid, which reduces the stretching of the horizontal grid near the edges of the domain (Côté et al. 1993). The RAP has 50 hybrid vertical levels, with a model top at 10 hPa.

Fig. 1. Map of North America showing the computational domains of RAP (white boundary) and HRRR (green boundary).

Citation: Monthly Weather Review 145, 8; 10.1175/MWR-D-16-0398.1

The data assimilation system used in the RAP is GSI, a community-supported data assimilation system (Kleist et al. 2009; Shao et al. 2016). The data assimilation is carried out in a hybrid three-dimensional ensemble–variational configuration (Wang et al. 2013), wherein the data assimilation background forecast covariances are a weighted average of the traditional 3DVAR covariances (Wu et al. 2002; Whitaker and Hamill 2002; Whitaker et al. 2008) and flow-dependent covariances derived from the 80-member Global Forecast System (GFS) data assimilation ensemble (25% variational and 75% ensemble in this version of RAP; more details are provided in Table 2 of B16). This hybrid ensemble–variational GSI-based approach for RAP (Hu et al. 2017, manuscript submitted to Mon. Wea. Rev.) results in improved forecasts (particularly for upper-level winds; B16) compared to the 3DVAR data assimilation carried out in the RUC model. After the application of the primary hybrid data assimilation, a nonvariational cloud and hydrometeor analysis is carried out based primarily upon satellite and surface-based ceilometer data (also described by B16).
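The covariance blending described above can be sketched numerically. The snippet below is only an illustration of the weighted-average idea with the RAP weights (25% static, 75% ensemble); the operational GSI works through control-variable transforms and covariance localization rather than explicit covariance matrices, and the state size, member handling, and Gaussian correlation model here are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_members = 6, 80   # toy state size; 80 members as in the GFS ensemble

# Static "3DVAR-like" covariance: a simple distance-based Gaussian
# correlation model (an assumption for illustration only).
idx = np.arange(n_state)
B_static = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 2.0) ** 2)

# Flow-dependent covariance estimated from ensemble perturbations
# about the ensemble mean.
members = rng.standard_normal((n_members, n_state))
perts = members - members.mean(axis=0)
P_ens = perts.T @ perts / (n_members - 1)

# Hybrid covariance: weighted average of the two estimates,
# using the RAPv3 weights quoted in the text.
beta_static, beta_ens = 0.25, 0.75
B_hybrid = beta_static * B_static + beta_ens * P_ens

print(B_hybrid.shape)  # → (6, 6)
```

The ensemble term supplies flow dependence ("errors of the day"), while the static term guards against sampling noise from the finite ensemble.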

After the data assimilation process, and before the WRF free-forecast integration, a diabatic digital filter initialization procedure (DFI; Huang and Lynch 1993; Peckham et al. 2016) is applied within WRF. This process damps spurious gravity wave noise during the first hour of the forecast. The diabatic portion of the filter introduces specified latent heating, where available, during the DFI period; this heating is derived from three-dimensional radar reflectivity observations and, to a much smaller extent, is augmented by lightning observations (B16). This procedure is a low-cost way of producing an initial field containing the mesoscale circulations related to precipitation processes. The RAP system uses a variety of physical parameterizations within the WRF framework, also described in detail in B16. To keep regional forecasts from drifting away from the truth, the RAP employs partial cycling (Rogers et al. 2009) to reintroduce the GFS atmospheric state twice a day, at 0900 and 2100 UTC. Figure 2 shows a schematic of the RAP partial cycling configuration; the GFS atmospheric state is “partially cycled” for 6 h before being injected into the RAP full cycle. Within the context of this paper, this cycling exerts a slight damping on observation impact by reintroducing large-scale fields from the global model and its own global data assimilation.
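
As an illustration of the filtering idea behind DFI, the sketch below constructs normalized low-pass weights and applies them to a sequence of model states centered on the initialization time. The Lanczos-windowed sinc form is one simple choice for illustration only; WRF's DFI offers several filter options, and this is not necessarily the operational one:

```python
import numpy as np

def dfi_weights(n_steps, cutoff_steps):
    """Low-pass (Lanczos-windowed sinc) weights spanning 2*n_steps+1 model
    states centered on t = 0. A sketch of the digital-filter idea only."""
    k = np.arange(-n_steps, n_steps + 1)
    theta_c = np.pi / cutoff_steps               # cutoff frequency (rad/step)
    k_safe = np.where(k == 0, 1, k)              # avoid 0/0 at the center
    h = np.where(k == 0, theta_c / np.pi,
                 np.sin(theta_c * k) / (np.pi * k_safe))
    sigma = np.sinc(k / (n_steps + 1))           # Lanczos window
    w = h * sigma
    return w / w.sum()                           # normalize to preserve the mean

# Filtering a constant (noise-free) state sequence leaves it unchanged,
# while high-frequency gravity wave oscillations would be damped:
w = dfi_weights(n_steps=30, cutoff_steps=12)
states = np.ones(61)                             # one state value per model step
filtered = np.dot(w, states)
print(round(filtered, 6))  # 1.0
```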

Fig. 2. Configuration of the RAP partial cycling. Black circles represent the RAP data assimilation cycle with a background supplied by a prior forecast (represented by the gray arrows). At 0300 UTC, a parallel “partial” cycle is initialized from GFS atmospheric fields but using the RUC LSM state. A background from this partial cycle is used for the data assimilation in the primary “full cycle” 6 h later, at 0900 UTC. The procedure is repeated during 1200–2300 UTC.


b. Hourly and subhourly observations

The RAP model assimilates a variable number of observations within each hourly cycle. Table 1 provides the approximate number of observations of each type assimilated during each cycle; this table (taken from Table 4 in B16) may be directly compared with Table 1 in B10. As Table 1 shows, RAP-assimilated observations fall into rawinsonde, commercial aircraft, surface, satellite-related, and radar-related groupings. All of these observation types have unique sampling limitations. For instance, observations from commercial aircraft provide vertical structure, but with an irregular distribution: atmospheric profiles only near major airports, and upper-tropospheric coverage along common flight routes at certain times of day. About ⅛ of aircraft reports over the RAP domain include moisture observations. More specific information on observation types available from aircraft is provided by Moninger et al. (2003), Petersen (2016), and Petersen et al. (2016). Twice-daily rawinsondes provide additional important vertical structure information, but only over land. Surface observations are two-dimensional and subject to local variations and even terrain elevation mismatches between observations and background; the latter is addressed with an adjustment as described by B16 (local variable lapse rate) and Ingleby (2015) (fixed lapse rate).

Table 1. Observational data used in RAPv3 as of Sep 2015: p is air pressure, q_v is water vapor mixing ratio, T_v is virtual temperature, RH is relative humidity with respect to water, V refers to horizontal wind components, T is temperature, p_s is surface pressure, T_d is dewpoint, and PW is precipitable water. Table is from B16.

Observations from geostationary satellites represent an important source of both cloud and wind information for NWP data assimilation. Near-real-time cloud properties suitable for data assimilation are derived hourly from Geostationary Operational Environmental Satellites (GOES; Minnis et al. 2008) using the methods of Minnis et al. (2011). In addition, atmospheric motion vectors (AMVs) are derived from GOES and other satellite observations (Velden et al. 2005; Bresky et al. 2012); these observations provide important tropospheric wind information in otherwise data-sparse regions. Vertically integrated water vapor observations can be derived from global positioning system (GPS) receivers; an extensive network of GPS receivers around the world provides precipitable water (PW) observations for NWP data assimilation (Wolfe and Gutman 2000). These observations, referred to as GPS-met PW observations, have been shown to play an important role in NWP forecast skill through their ability to depict the distribution of tropospheric water vapor (Gutman and Benjamin 2001; Smith et al. 2007). Integrated PW is dominated by changes in low-level water vapor, which is critical for convective weather forecasting (e.g., Weckwerth 2000; Weckwerth et al. 2004). The impact of satellite radiances, also assimilated in the RAP, is relatively small but significant and is described in separate articles by Lin et al. (2017) and Lin et al. (2017, manuscript submitted to Wea. Forecasting).

Radar observations from the Weather Surveillance Radar-1988 Doppler (WSR-88D) Next Generation Weather Radar (NEXRAD) network provide critical information about the distribution of hydrometeors in the atmosphere. Development of a real-time distribution system for these large datasets (e.g., Kelleher et al. 2007) has permitted their use in NWP initialization. In addition to the radar reflectivity and radial velocity observations available from the WSR-88Ds, profiles of wind speed and direction can be derived using the velocity azimuth display (VAD; Lhermitte and Atlas 1960) technique. These wind profiles provide another dataset available for assimilation within NWP systems.

To interpret the upcoming observation impact results, it is important to identify the actual measurements made by the different platforms, as shown in Table 2. Aircraft, rawinsonde, and surface measurement platforms include separate instruments to determine wind, temperature, and relative humidity (RH) observations. Observation impact is multivariate rather than univariate: observations of a single variable can affect other variables through multivariate correlations in the data assimilation system.

Table 2. Atmospheric observations measured by instrument platforms. A pressure and height pair indicates that the platform provides sufficient information to determine the mass field.

c. Major changes from the RUC system in 2007 to the RAP system in 2016

The version of the RAP NWP system used in this study (RAPv3; B16) is a substantially more advanced modeling system than the 2007 RUC system used for the OSEs presented by B10. In this section, we summarize some of the key observation, data assimilation, and model improvements (from the perspective of reducing forecast error) from the March 2007 RUC system to the 2016 RAPv3 system.

Regarding observations, radar reflectivity observations are now assimilated in this study using latent heating specification during a preforecast application of forward–backward digital filter initialization (B16, section 2d). Satellite radiance observations are assimilated in the RAP but were not in the RUC (B16), and the volume of aircraft data has increased globally by a factor of 4–5 (dominated by a U.S. increase) since 2007 (WMO 2016). The NOAA Profiler Network (Benjamin et al. 2004b), with about 30 wind profilers, was shut down in 2013; however, other multiagency profilers continued to operate in 2016. The data assimilation for hourly updated models has been significantly improved since 2007 (when the RUC used 3DVAR), with RAPv3 now using hybrid ensemble–variational assimilation with GSI and improved assimilation of cloud, surface, and radar observations. Observation errors (OEs) remain essentially unchanged, with both the RUC and the RAP using an OE specification from the North American Mesoscale Forecast System (NAM) that is appropriate for a regional NWP system. The dynamical core for the RAP is WRF-ARW, while the physics parameterizations used in the RAP are improved versions of those used in the RUC, developed for short-range effectiveness.

The RAPv3 uses a greatly expanded domain relative to the previous RUC domain, comprising approximately 4 times the RUC horizontal area (Fig. 1). Most of the domain expansion was toward the north and west, such that the 2016 RAP domain covers all of the Canadian Arctic region as well as the entire Aleutian Islands chain. This expansion reduces the influence of the GFS boundary conditions and further removes much of the CONUS from domain boundary influences.

3. Observation impact experimental design

For this study, observation impact experiments were configured similarly to B10. All of the assessments were conducted for three 10-day periods over three seasons (Table 3). The spring period (15–25 May 2013) featured several major severe weather outbreaks in the south-central United States, including significant tornadoes in Oklahoma on 19 and 20 May, and a high coverage of severe wind reports scattered throughout the southern and eastern CONUS during 21–22 May. Experiments were also carried out for summer and winter season periods (Table 3) to better assess the seasonal dependence of the observation impact and to obtain overall year-round estimates of impact.

Table 3. Experimental periods for the RAP observation system experiments.

a. Control and data-denial experiments

To configure each of the multiseason OSEs (Table 3), a control RAP cycle with all available observations was first conducted. After the completion of this control experiment, we conducted up to 14 separate data-denial experiments (9 for spring and 13 for winter), excluding various observation types from the data assimilation process. Table 4 presents the names and descriptions of the experiments. All of the 10-day experiments begin in the same way: surface fields from the 1-h forecast of the 0200 UTC real-time RAP cycle, and atmospheric fields from the GFS, are used to initialize a 6-h partial cycle (see Fig. 2). From 0300 UTC on the first day through 2300 UTC on the final day, the data-denial experiments are configured identically to the control experiment except for the exclusion of certain observation types, with 12-h forecasts initialized every hour. Boundary conditions (from the GFS) are the same for all of the experiments. Note that our experiments did not include any OSEs on satellite radiance assimilation; the impact of these observations is discussed by Lin et al. (2017) and Lin et al. (2017, manuscript submitted to Wea. Forecasting).

Table 4. RAP observation system experiments: A–J are denial experiments for each observation type, K–N are denial experiments for subsets of aircraft observations, and O–S are additional sensitivity experiments on surface observation and satellite AMV assimilation.

During the course of executing the OSEs summarized in Table 4, several additional experiments were carried out in order to investigate in more detail the impact of assimilating surface observations and satellite AMVs (Table 4). These experiments, discussed in sections 6 and 7, led to the development of revised RAP configurations, which then were used as control experiments for examining the “no surface” and “no AMV” OSEs. Each of the experimental pair error differences shown herein is thus a controlled experiment, but modified control runs are used for calculating forecast degradation for the no-surface and no-AMV OSEs; for more details, the reader is referred to sections 6 and 7.

b. Verification

For all of the retrospective experiments, model forecasts were verified against conventional, twice-daily rawinsonde observations over the lower 48 United States (CONUS) as well as over the entire RAP domain (Fig. 1). For each experiment, residuals (forecasts minus observations) were calculated for 3-, 6-, 9-, and 12-h forecasts at each rawinsonde location. The root-mean-square difference (RMSE; the magnitude of the root-mean-square vector difference in the case of wind) between the forecasts and observations was then calculated for each 12-h rawinsonde verification time (0000 and 1200 UTC). All of the verification presented here uses observations and model forecasts interpolated to a 10-hPa vertical resolution from significant-level rawinsonde observations and native model data. For more detail on the verification procedure used here, the reader is referred to Moninger et al. (2010).
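
The verification statistics described above can be sketched as follows. The helper functions and toy values are hypothetical, but the formulas (RMSE for scalar variables, magnitude of the RMS vector difference for wind) follow the text:

```python
import numpy as np

def scalar_rmse(forecast, observed):
    """Root-mean-square of forecast-minus-observation residuals
    (e.g., temperature or RH) over a set of rawinsonde locations."""
    resid = np.asarray(forecast, dtype=float) - np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean(resid ** 2)))

def wind_rms_vector_error(fu, fv, ou, ov):
    """Magnitude of the RMS vector difference between forecast (fu, fv)
    and observed (ou, ov) horizontal wind components."""
    du = np.asarray(fu, dtype=float) - np.asarray(ou, dtype=float)
    dv = np.asarray(fv, dtype=float) - np.asarray(ov, dtype=float)
    return float(np.sqrt(np.mean(du ** 2 + dv ** 2)))

# Toy example at a few rawinsonde locations (values illustrative only):
print(scalar_rmse([290.0, 285.0, 280.0], [289.0, 286.0, 280.0]))
print(wind_rms_vector_error([10, 12], [0, 1], [9, 12], [0, 0]))
```

In the actual verification these statistics would be accumulated per 10-hPa level, per forecast lead time, and per 0000/1200 UTC verification time before averaging.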

To interpret subsequent results, we estimated a forecast error reduction percentage. Examining longer-term error statistics from the RAP (approximated as the difference in fit to rawinsonde observations between 6-h RAP forecasts and 0-h RAP analyses during 2015; Fig. 3) provides context for the forecast improvements described below. This difference is used to estimate an approximate 25% reduction in forecast error for wind, temperature, and RH (Table 5). The analysis fit to rawinsonde observations is used as a proxy for truth, since the analysis (see B16) incorporates the expected observation errors of rawinsondes and the other assimilated observation types. From this 2015 result, we estimate 25% of the maximum possible improvement in 6-h forecast skill as 0.3 m s−1 for wind, 0.1 K for temperature, and 1% (on the 0–100 scale) for RH (included later in Fig. 7).
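
The normalization logic can be sketched as below. The 2.4 and 1.2 m s−1 error values are purely illustrative, while the example of a 0.3 m s−1 impact corresponding to a 25% error reduction matches the estimate in the text:

```python
def error_reduction_percent(impact, fcst6_error, analysis_error):
    """Express a data-denial RMSE impact as a percentage of the maximum
    possible improvement, approximated by the difference between the 6-h
    forecast fit and the 0-h analysis fit to rawinsondes."""
    max_improvement = fcst6_error - analysis_error
    return 100.0 * impact / max_improvement

# E.g., with an illustrative 6-h wind error of 2.4 m/s and analysis fit of
# 1.2 m/s, a 0.3 m/s denial impact corresponds to a 25% error reduction:
print(error_reduction_percent(0.3, 2.4, 1.2))  # 25.0
```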

Fig. 3. Vertical profile of RAPv3 fit to rawinsonde observations during January–December 2015 for 6-h forecasts (blue), 0-h analyses (red), and the difference (6 h minus 0 h, black) for (a) wind, (b) temperature, and (c) RH. Rectangular boxes every 50 hPa contain values significant at the 95% level.


Table 5. Normalizing percentage difference for 6-h forecast minus 0-h analysis for wind, temperature, and RH, based on Benjamin et al. [(2004a), their Figs. 9 and 10 and Eq. (11)]. A simple level-independent approximation is made, with values averaged over all levels, to determine the approximate 25% error reduction level.

As mentioned in section 2c, the RAP system uses partial cycling whereas the previous RUC (used for the OSEs of B10) did not. Since the verified forecasts are valid only at 0000 and 1200 UTC, the timing of the partial cycle introduction into the full cycle affects the influence of certain observation types. For example, for 3-h forecasts valid at 1200 UTC (initialized at 0900 UTC), only 7 h of RAP data assimilation have occurred (six during the partial cycle from 0300 to 0800 UTC, plus one for the full cycle at 0900 UTC; cf. Fig. 2). Older observations have an influence only through the GFS initial conditions introduced at 0300 UTC; thus, we would not expect to see any impact of denying (within the RAP) the assimilation of observations before 0300 UTC. Since rawinsondes are primarily available only at 0000 and 1200 UTC, we would not expect to see any impact of rawinsondes in 3-h forecasts in our experiments. In general, 6-h forecasts are expected to show the strongest impact of observations since they reflect the longest period of RAP data assimilation, and impacts should also generally decrease with increasing forecast lead time as the model initial conditions become less important. All experiments carried the same specified set of observations (or denial) through the partial cycling and the primary cycle with the RAP forecasts.
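
The timing arithmetic above can be captured in a small hypothetical helper (not part of the RAP code), which counts hourly assimilation cycles from the most recent partial-cycle start at 0300 or 1500 UTC (Fig. 2):

```python
def rap_assimilation_cycles(init_hour_utc):
    """Number of hourly RAP assimilation cycles contributing to a forecast
    initialized at init_hour_utc, counting from the most recent partial-cycle
    start (0300 or 1500 UTC; see Fig. 2). A hypothetical helper illustrating
    the timing arithmetic in the text."""
    h = init_hour_utc % 24
    if h >= 15:
        start = 15
    elif h >= 3:
        start = 3
    else:              # wrap past midnight to the prior day's 1500 UTC start
        start = 15
        h += 24
    return h - start + 1

# 0900 UTC initialization: six partial-cycle hours (0300-0800 UTC)
# plus the 0900 UTC full-cycle analysis itself:
print(rap_assimilation_cycles(9))  # 7
```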

4. Experiment results

In this section, we present our verification results for the OSEs in a format similar to that of B10, verifying upper-air RAP forecasts against 12-h rawinsonde observations. Results are presented in the form of “candlestick plots,” wherein each data-denial experiment is assigned a bar height associated with the degradation in RMSE (averaged against all observations within the region, vertical layer, and time of interest) as compared with the control run. Within this framework, the data-denial experiments are anticipated to generally have positive bars, indicating an increase in RMSE (or a degradation in forecast skill) as compared to the control run. Negative bars indicate that denying that particular dataset actually results in a reduction in RMSE (or a forecast improvement). Error bars are added to figures in this study to show the range of one standard error (i.e., the 67% confidence interval) calculated using the method of Weatherhead et al. (1998), as applied and described by B10. The standard error is calculated from the sample standard deviation and the lag-1 autocorrelation for the time series of RMSE differences as verified against rawinsonde observations (following the method of B10).
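
The autocorrelation-adjusted standard error described here can be sketched as follows, assuming the common lag-1 inflation factor sqrt((1 + r)/(1 − r)) from Weatherhead et al. (1998); the function name is hypothetical:

```python
import numpy as np

def autocorrelated_standard_error(diffs):
    """Standard error of the mean of a time series of RMSE differences,
    inflated for lag-1 autocorrelation: se = s/sqrt(n) * sqrt((1+r)/(1-r)).
    A sketch of the Weatherhead et al. (1998) approach as applied in B10."""
    x = np.asarray(diffs, dtype=float)
    n = x.size
    s = x.std(ddof=1)                     # sample standard deviation
    xc = x - x.mean()
    r = np.sum(xc[:-1] * xc[1:]) / np.sum(xc ** 2)   # lag-1 autocorrelation
    return s / np.sqrt(n) * np.sqrt((1 + r) / (1 - r))

# For white noise, r is near zero and the result approaches s/sqrt(n);
# positively autocorrelated series yield wider error bars.
rng = np.random.default_rng(1)
series = rng.standard_normal(500)
se = autocorrelated_standard_error(series)
print(se)
```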

a. Overview results for the full depth (1000–100 hPa) over North America

As in B10, we begin with a broad view of the results, considering first the impact of the different observation types for each of the three experiment seasons within a vertically integrated column: 1000–100 hPa for wind (Fig. 4) and temperature (Fig. 5), and 1000–400 hPa for RH (Fig. 6), through the entire RAP domain (see Fig. 1).

Fig. 4. Differences in wind RMS vector error (vs rawinsonde) between observation denial experiments listed in Table 4 and the control run for the 1000–100-hPa vector wind (m s−1) for the RAP (North America) domain (see Fig. 1). Results for each of the 10 observational denial experiments are coded with a different color [raob, navy blue; aircraft, pink; profiler, steel blue; radar reflectivity, purple; VAD, light blue; GPS PW, forest green; GOES satellite observations, light green; surface, red; mesonet, yellow (shown only for summer); satellite cloud-drift winds, sky blue]. Four adjacent bars are shown for each OSE for 3-, 6-, 9-, and 12-h forecasts. Results are shown for three seasons (Table 3): (a) summer, (b) spring transition, and (c) winter. Statistical uncertainties are indicated for each OSE by the narrow black vertical lines showing ±1 standard error from the mean impact.


Fig. 5. As in Fig. 4, but for temperature RMS error (K).


Fig. 6. As in Fig. 4, but for 1000–400-hPa RH RMS error (% from 0 to 100).


For wind forecasts in this full tropospheric (and lower stratospheric) 1000–100-hPa layer, out of the 10 observation types, aircraft clearly have the largest impact, and similarly so over all three seasons: 0.3 m s−1 for 6-h forecasts (Fig. 4). Aircraft data reduce tropospheric wind error at 6 h by ~25% over the RAP domain (per Table 5). GOES and raobs gave a positive but far smaller impact of 0.05–0.10 m s−1 during each of the three seasons. Surface observations gave similar impacts for wind forecasts over this full 1000–100-hPa layer in the spring and summer seasons, with less impact in winter. All of the cited observation types (except for mesonet and wind profiler observations) showed significant impact at the 67% level during the spring and summer seasons; winter observation impacts are much less for most observation types.

For temperatures through the deep 1000–100-hPa layer (Fig. 5), aircraft observations had the largest impact: 0.10–0.14 K at 6 h for all seasons. (A value of 0.1 K corresponds to a ~25% error reduction.) Rawinsonde and surface observations had a somewhat reduced impact: 0.02–0.06 K, which is about a 5%–12% error reduction. The impact of many of the other observation types is quite muted in all seasons.

For RH through the 1000–400-hPa layer, a broader combination of observation types contributes to improved forecast skill, led by aircraft observations at 0.5%–0.7% RH for 6-h forecasts, but joined by GOES (with both cloud–clear data and AMVs; Table 2). Rawinsonde and surface observations both contribute by 0.2%–0.5% RH for 6-h forecasts, and even GPS PW and radar reflectivity results show a positive impact to RH forecast skill through this relatively deep (1000–400 hPa) layer (Fig. 6).

Results from the multiseason periods were combined to produce the summary statistics given in Fig. 7. For all variables, aircraft observations gave the largest impact at most forecast hours, contributing to a 25% reduction (calibrated from Table 5) in wind and temperature forecast error at 3 and 6 h, and about a 10%–15% error reduction for RH forecasts. Surface observations emerge as a leading contributor to forecast skill for all three variables, with an impact comparable to that of rawinsondes. An additional experiment (not shown) denying the assimilation of only surface pressure observations revealed that nearly all of the positive impacts of surface observations come from temperature, moisture, and wind observations (with the exception of 1000–800-hPa wind at night). GOES satellite observations have an RH impact similar to that from aircraft, with a reduced impact on winds (relative to aircraft) and negligible change for temperature. Minor but significant positive wind and RH impacts are evident for all other observation types, with the possible exception of wind profiler observations and VAD winds (see below).

Fig. 7. Observation impact results integrated over all three seasons, similar to Figs. 4–6, showing (a) wind, (b) temperature, and (c) RH. The horizontal black dashed lines indicate the level of 25% forecast error reduction, as shown in Table 5.


These integrated statistics on forecast impact were also examined for temporal and vertical consistency, as shown in Fig. 8. The impact of denying aircraft data was broadly similar for each 12-h rawinsonde observation time through the July 2014 summer period, and the vertical profile of the overall RMS vector error and aircraft impact is similar to that shown in B10. All of the results shown in this study have been examined similarly (not shown), and the temporal consistency over each seasonal test period as in Fig. 8 demonstrates that the 10-day test periods are representative for each of the seasonal evaluations.

Fig. 8. (a) Temporal and (b) vertical consistency for 6-h wind forecast error (vs raobs) for control (red) and no-aircraft (blue) experiments and the difference (black) for the July 2014 summer period verified over the entire RAP domain.


b. Vertical variation of observation forecast impact

This section examines the vertical variation of the forecast impacts described in section 4a. The vertical variation of the wind forecast impact is shown in Fig. 9. For the summer period (Fig. 9c), aircraft observations once again dominate for mid- and upper-level wind forecasts, with an impact exceeding 0.4 m s−1 at 6 h in the 400–100-hPa layer. This represents a reduction in forecast error of approximately ⅓. Surface observations contribute the most strongly for near-surface wind forecasts (with aircraft observations taking over above 900 hPa; not shown); however, the impact of surface observations extends into the 800–400-hPa layer and, surprisingly, even above 400 hPa (Figs. 9b,c; see section 6). Above 800 hPa, surface observations exhibit their peak impact at somewhat longer lead times (6–9-h forecasts). Mesonet observations produce a minor positive impact below 400 hPa. GOES satellite observations represent the second-most important observation type for winds in the 400–100-hPa layer, with an impact of 0.18 m s−1 (or about 15%) at 6 h (Fig. 9c). AMVs contribute a 0.05 m s−1 impact, statistically significant but relatively small compared to the overall GOES impact (combined AMV and cloud). Note that the small impacts seen for some other observation types are also nevertheless often statistically significant, as a result of the large sample size. These small but significant impacts for summer upper-level winds from surface, GPS PW, and radar reflectivity data are attributed to their contributions toward improved convection forecasts.

Fig. 9. As in Fig. 4, but for (a)–(c) summer, (d)–(f) spring, and (g)–(i) winter, and stratified by layer: (a),(d),(g) 1000–800, (b),(e),(h) 800–400, and (c),(f),(i) 400–100 hPa.


The majority of the GOES satellite observation impact on upper-level wind apparently comes from non-AMV observations (i.e., GOES cloud observations; see Table 2). One possibility is that this effect on wind is coming from GOES cloud (and indirectly RH) influences on the convective environment, since this is a summertime period. However, Figs. 9f,i reveal that non-AMV GOES observations continue to have a strong impact on upper-level winds in the spring and even winter seasons. The vertical profile of GOES observation (cloud and AMV) impacts (Fig. 10) shows the difference in the RH bias profile from the control experiment versus the no-GOES experiment for both summer and winter. It is apparent that the GOES cloud observations are acting to dry the column, particularly in the 600–100-hPa layer (through cloud analysis “clearing” where GOES observations indicate no cloud cover). Evaluation of precipitation forecast bias verified against Stage IV precipitation estimates (not shown) shows that for both summer and winter seasons, GOES satellite observations act to reduce the high precipitation bias in the RAP forecasts. Such changes to precipitation systems within the model atmosphere also affect the wind forecast skill in all seasons (e.g., Fig. 9).

Fig. 10. Profile of 6-h forecast RH bias (%, 0–100) for control run (red) and no-GOES experiment (blue) for (a) summer and (b) winter, verified over the entire RAP domain. Differences are shown in black. Rectangular boxes containing values significant at the 95% level are also shown at each level.


Figure 11 shows a vertical layer-by-layer breakdown for temperature forecasts. The observation impact distribution for temperature forecasts is relatively simple, with only three major observational contributors (all in situ) to forecast skill. Aircraft observations again provide the strongest impact overall and are second only to surface observations in the 1000–800-hPa layer (Fig. 11a). Surface observations contribute to about a 0.1-K reduction of forecast RMSE at 3 h. Aircraft observations reach this level of impact above 800 hPa (Fig. 11b), with rawinsondes contributing to about a 0.05-K (relatively strong) reduction in forecast RMSEs for 6-, 9-, and 12-h forecasts above 400 hPa in the RAP domain (Fig. 11c).

Fig. 11. As in Fig. 9, but for temperature RMS error (K) and for summer only.


Figure 12 shows additional layer-by-layer results, now for RH RMSE. In the near-surface 1000–800-hPa layer (Fig. 12a), surface and aircraft observations have an approximately equal impact, contributing to about a 0.3% RH RMSE reduction. Rawinsondes and GPS-met PW observations are of secondary, but still significant, importance for the 1000–800-hPa layer. Within the 800–400-hPa layer, aircraft and GOES satellite (cloud) observations are the strongest contributors to RH forecast skill (Fig. 12b), each contributing about 0.75% for 6-h forecasts (about a 15%–20% reduction in RMSE). VAD wind observations have a negative impact on RH forecast skill, particularly in the low to midlevels; we hypothesize that this is due to contamination of horizontal wind observations by bird migration (mainly at night; see Fig. 13), which then has a subsequent effect on RH forecasts through increased error in the vertical motion. RH forecast errors could also result from horizontal wind errors leading to errors in the transport of features, particularly in regions of strong horizontal RH gradients. In the 400–100-hPa (essentially cirrus clouds, largely nonprecipitating) layer, GOES satellite observations overtake aircraft observations as the most important observation type (Fig. 12c), contributing to a 25% reduction in RMSE for RH, attributable to cloud (and related water vapor) data assimilation in the RAP (see B16, section 2e).

Fig. 12. As in Fig. 9, but for RH RMS error (% from 0 to 100) and for summer only.


Fig. 13. As in Fig. 7, but for summer only, for the 1000–600-hPa layer (lower troposphere) only, and for verification times (a)–(c) daytime (0000 UTC) and (d)–(f) nighttime (1200 UTC) for (a),(d) wind, (b),(e) temperature, and (c),(f) RH.


c. Diurnal variation of observation forecast impact

Observation impacts also exhibit diurnal variability (with the 0000 UTC raob verification time roughly representing afternoon/daytime atmospheric evolution across North America, and the 1200 UTC raob time representing largely nighttime evolution). To highlight diurnal variation occurring near the surface, we focus on the 1000–600-hPa layer for the summer test period (Fig. 13). During the daytime, there is a stronger impact for 3-h forecasts (2100 UTC initialization time) from surface observations, with a lower impact for 9–12 h (Figs. 13a–c). This is attributable to the planetary boundary layer (PBL) extension of surface observation assimilation in GSI enabled for RAP (B16, section 2f), with the deepest PBL typically occurring near 2100 UTC on average over the CONUS. Errors in forecast cloudiness could also contribute to obscuring the impact of the initial conditions on longer-lead-time low-level temperature forecasts. Neither of these factors apply at night, when stronger surface effects are seen from the 0000 UTC (12-h forecast) and 0300 UTC (9-h forecast) initializations (Figs. 13d–f). Most notable in the diurnal breakdown is the negative impact from VAD wind observations. The strong negative impact overnight on lower-tropospheric wind forecasts from VAD is likely attributable to bird and other biological activity (e.g., Lakshmanan et al. 2010; Wilczak et al. 1995), which may also be affecting overnight lower-tropospheric RH forecasts especially at 3 h, presumably through contamination of the vertical motion fields.

5. Aircraft observation impact breakdown for wind, temperature, and moisture

With the dominant impact from commercial aircraft observations shown in previous sections, a series of additional experiments was carried out to examine the contributions from different components of aircraft observations, as shown in Table 4: (i) en route (defined here as pressure less than 350 hPa) versus ascent–descent (pressure greater than 350 hPa) and (ii) the specific contribution from aircraft moisture and temperature observations. We examine results for the full RAP domain. Given that the overall 1000–100-hPa aircraft impact was similar during the summer, spring, and winter seasons (Figs. 4–6), we focus here solely on the summer season.

Considering the tropospheric profile as a whole (Fig. 14), observations from en route aircraft and those from ascending–descending aircraft represent approximately equal contributors to the overall 1000–100-hPa wind and temperature forecast skill. Aircraft temperature observations evidently contribute indirectly to wind forecast skill (through multivariate background error covariances in GSI; Fig. 14a). Aircraft temperature observations also do not represent the only aircraft contribution to temperature forecast skill; around 0.03 K of the total 0.11-K positive impact for 6-h forecasts comes indirectly through aircraft wind observations, again through mass–wind background error covariances (Fig. 14b). The total aircraft observation impact for 1000–400-hPa RH is about 0.5% (a major contributor to overall observation impact; cf. Fig. 7). This entire impact comes from ascending–descending aircraft, and is apparently not caused in these experiments by the presence of abundant aircraft jet-level (<350 hPa) wind observations. Direct RH aircraft observations represent only about ⅓ of the total aircraft observation impact on RH forecasts. A vertical profile of RH forecast impact from aircraft observations overall versus aircraft RH observations only (Fig. 15) also confirms this ⅓ fraction overall and up to 50%–60% in the 600–500-hPa layer. It is important to note that direct aircraft moisture observations reach this level of impact despite being available in only about ⅛ of the total number of aircraft observations assimilated within the RAP system. [These results included the Water Vapor Sensing System (WVSS; Petersen et al. 2016) aircraft water vapor observations, but not Tropospheric Airborne Meteorological Data Reporting (TAMDAR; Daniels et al. 2006) aircraft water vapor observations; TAMDAR data (Moninger et al. 2010) were not available to NOAA in real time during these test periods.]

Fig. 14.

As in Fig. 7, but for summer only and showing aircraft-specific experiment results (all aircraft, pink; cruising aircraft (above 350 hPa), orange; ascent–descent aircraft (below 350 hPa), burgundy; aircraft RH, aquamarine; aircraft temperature/RH, yellow). Results are for RMSEs of (a) 1000–100-hPa vector wind, (b) 1000–100-hPa temperature, and (c) 1000–400-hPa RH.

Citation: Monthly Weather Review 145, 8; 10.1175/MWR-D-16-0398.1

Fig. 15.

Vertical forecast impact (similar to Fig. 8b) for 6-h RH forecasts valid at 0000 UTC over the CONUS from denial of aircraft observations overall (red) and denial of aircraft moisture observations only (blue). Results are from the July 2014 test period. Rectangular boxes containing values significant at the 95% level are also shown at each level.


Looking specifically at wind forecasts in the 400–100-hPa layer (where wind forecasts are particularly important for flight planning; Fig. 16), approximately ⅔ of the total aircraft observation impact comes from aircraft above 350 hPa, with the remaining ⅓ from ascending–descending aircraft. A significant positive impact on upper-level wind accuracy (about 25% of the overall aircraft impact) also comes from aircraft temperature observations, partially through the multivariate background error covariances mentioned above but possibly also from improved convection and precipitation forecasts, as surmised for moisture-related observations in the discussion of Fig. 9c.

Fig. 16.

As in Fig. 14, but for 400–100-hPa (top level) vector wind RMSE.


6. Surface observation impact and related sensitivity tests

The assimilation of surface observations is an important component of the RAP system. The results shown above use an improved treatment of surface-based observations, with assimilation refinements determined through earlier experiments performed for this study. These refinements and the related sensitivity tests are described in this section.

Surface observations influence the RAP analysis primarily through their use within the GSI data assimilation framework (for pressure, temperature, water vapor, and wind observations), but surface ceilometer observations are also one of two cloud observation sources (along with GOES) for the nonvariational 3D cloud analysis (B16; section 2e). Unexpected results from preliminary OSEs conducted within this study led to refinements in the treatment of surface observations to improve forecast skill. These new experiments (Table 4) are not data-denial experiments like those shown previously, but data assimilation sensitivity tests for refinements designed to improve the impact of surface observations. Differences between these pairs of experiments are shown with candlestick plots as before; however, for these tests of potential data assimilation improvements, a negative impact bar indicates better forecast skill for that test relative to the control experiment. Figure 17 shows forecast skill differences in the 1000–600-hPa layer for these tests. Experiment H (red) represents the impact from the denial of surface observations using the original configuration of surface assimilation. For wind forecasts, surface observations originally had a near-neutral effect (depending on the forecast length considered). A positive impact (25% of total error for 3 h) occurred for daytime 1000–600-hPa temperature forecasts valid at 0000 UTC, but surface observations had a consistently negative impact on RH forecasts under that original design (experiment H).
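The candlestick quantities used in these comparisons reduce to a simple paired-difference calculation: for each matched forecast case, the control-experiment error is subtracted from the test (or denial) experiment error, and the differences are summarized by their mean and its standard error. A minimal sketch, assuming matched per-case RMS errors from the two experiments; the function name is ours, not from the RAP verification code:

```python
from math import sqrt

def forecast_impact(err_test, err_control):
    """Mean impact and its standard error from paired per-case errors.

    err_test / err_control: RMS errors (vs rawinsondes) for the same
    forecast cases from the test and control experiments.  A positive
    mean means the test experiment had larger errors than the control;
    a negative mean (as for experiments O, P, Q, and R) means the test
    configuration improved on the control.
    """
    d = [t - c for t, c in zip(err_test, err_control)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance
    return mean, sqrt(var) / sqrt(n)                  # +/-1 std error = the "wick"

# Hypothetical per-case RMSEs for four matched forecasts
mean, sem = forecast_impact([2.0, 2.2, 2.1, 2.3], [1.9, 2.0, 2.0, 2.1])
```

The bar in the candlestick plot corresponds to `mean`, and the narrow black vertical line to `mean ± sem`.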

Fig. 17.

Differences in RMS error (vs rawinsondes) between surface observation assimilation experiments and control run for the (a) 1000–600-hPa vector wind (m s−1), (b) temperature (K), and (c) RH (%, 0–100) for the RAP (North America) domain (see Fig. 1) during the summer period, at 0000 UTC. Results for each of five surface assimilation experiments are coded with a different color (original surface OSE, red; deny all pseudo-observations, orange; use reduced pseudo-observation density, burgundy; apply 1200-m cloud-building height limit for METARs, yellow; combined reduced pseudo-observation density and 1200-m cloud-building height limit for METARs, purple). Four adjacent bars are shown for each experiment for 3-, 6-, 9-, and 12-h forecasts. Statistical uncertainties are indicated for each experiment by narrow black vertical lines showing ±1 standard error from the mean impact.


Hypothesizing that the negative or small/neutral impacts of surface observations were due to the configuration of pseudoinnovations (a method of extending the influence of surface observations into the PBL in well-mixed environments; see section 2f in B16), an additional experiment was conducted in which all pseudoinnovations were withheld. Results from this experiment can be interpreted in the same way as a traditional OSE, with negative bars indicating forecast skill degradation coming from the pseudoinnovations. Figure 17 (experiment O, orange bar) confirmed that with the original configuration, the pseudoinnovations were having a negative forecast impact for all variables, particularly at shorter forecast lead times. This indicated that the earlier configuration applying pseudoinnovations was either wrong or incorrectly applied. Through a subsequent set of experiments, we determined that surface-based pseudoinnovations were applied too strongly and too high up into the PBL. An additional experiment was conducted with a reduced pseudo-observation density within the PBL (a spacing of 40 hPa instead of 20 hPa) and a reduced maximum pseudoinnovation height [40% of the PBL height instead of 75%; PBL height calculated using virtual potential temperature as described by Benjamin et al. (2004c)]. Verification of this experiment (experiment P; Fig. 17) indicates that this configuration performs very similarly to experiment O (no pseudoinnovations); that is, the reduced pseudoinnovation density yields forecasts superior to those from the original control configuration.
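The change in pseudo-observation placement can be illustrated with a short sketch. This is illustrative only: the operational RAP/GSI code works on model levels and diagnoses PBL depth from virtual potential temperature (Benjamin et al. 2004c), whereas here the PBL depth is approximated in pressure coordinates and the function name is ours:

```python
def pseudo_obs_levels(p_sfc_hpa, p_pbl_top_hpa,
                      spacing_hpa=40.0, max_pbl_fraction=0.4):
    """Pressure levels (hPa) at which surface-based pseudo-observations
    would be inserted, working upward from the surface at a fixed
    spacing and stopping at a prescribed fraction of the PBL depth."""
    pbl_depth = p_sfc_hpa - p_pbl_top_hpa              # PBL depth in hPa
    p_min = p_sfc_hpa - max_pbl_fraction * pbl_depth   # shallowest allowed level
    levels = []
    p = p_sfc_hpa - spacing_hpa                        # first level above the surface
    while p >= p_min:
        levels.append(p)
        p -= spacing_hpa
    return levels

# Original configuration: 20-hPa spacing up to 75% of the PBL depth
original = pseudo_obs_levels(1000.0, 800.0, spacing_hpa=20.0, max_pbl_fraction=0.75)
# Revised configuration: 40-hPa spacing up to 40% of the PBL depth
revised = pseudo_obs_levels(1000.0, 800.0)
```

For a 200-hPa-deep PBL, the revised configuration inserts far fewer pseudo-observations, confined much closer to the surface.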

We also hypothesized that some of the originally neutral-to-negative surface observation impacts could be due to the use of METAR ceiling (ceilometer) data. Prior experiments (not shown) had indicated negative impacts from middle- to upper-atmosphere cloud building (but not clearing). In the configuration used in the control experiments, cloud building from GOES satellite observations was allowed only up to 1200 m above ground level (AGL), but cloud building from METARs was allowed at any level where METAR ceilometers report cloud cover [up to 12 000 ft (3657.6 m) AGL]. An additional experiment was carried out applying the same 1200 m AGL cloud-building height limit to METAR observations as well as to satellite observations. Figure 17 shows that, indeed, RH forecast RMSEs and wind RMS vector errors are significantly reduced in this experiment (experiment Q, yellow). A slight degradation of temperature forecasts is seen at longer forecast lengths; this is likely related to the reduction of cloud cover during the daytime and the associated increase in solar irradiance reaching the ground.
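The revised height cap amounts to a single screening check on each ceilometer cloud-base report before it is allowed to build cloud in the analysis (clearing is unaffected). A hypothetical helper, assuming METAR ceilings reported in feet AGL; the names and interface are not from the RAP/GSI code:

```python
FT_TO_M = 0.3048
CLOUD_BUILD_LIMIT_M = 1200.0   # same cap already applied to GOES cloud building

def accept_metar_cloud_build(ceiling_ft_agl):
    """True if a METAR ceilometer cloud-base report may be used to
    *build* cloud in the analysis under the revised configuration."""
    return ceiling_ft_agl * FT_TO_M <= CLOUD_BUILD_LIMIT_M

low_deck = accept_metar_cloud_build(3000)     # ~914 m AGL: cloud may be built
high_deck = accept_metar_cloud_build(12000)   # 3657.6 m AGL: rejected by the cap
```

Under the original configuration, the second report would also have built cloud, since METAR cloud building was unrestricted up to the 12 000-ft ceilometer limit.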

To test these modified surface assimilation design features, one final experiment was conducted with both the reduced pseudo-observation density and the cloud-building height limit for METARs. This experiment is shown in Fig. 17 (experiment R, purple); this configuration results in significantly improved wind and RH forecasts in the 1000–600-hPa layer, with a near-neutral effect on temperature forecasts. This new configuration, with modified treatment of surface observations, is the control run used for reference for the “no surface” OSEs elsewhere in this paper (experiment H in Figs. 4–7, 9, and 11–13). Vertical profiles of the surface observation impact before and after these two assimilation changes (Fig. 18) show that the 6-h RAP forecast impacts shift from negative to generally positive. Both changes reduced the vertical extension of the surface-based observations. This improved configuration was a beneficial consequence of the RAP OSEs and the investigation of initially counterintuitive results.

Fig. 18.

As in Fig. 15, but showing the vertical forecast impact for 6-h forecasts of (a) vector wind (m s−1), (b) temperature (K), and (c) RH (%; 0–100) for the original no-surface experiment (blue) and the revised pseudo-observation (red) density configuration with a 1200-m height limit for METAR cloud building. Results are over the entire RAP domain for the summer test period. Rectangular boxes containing values significant at the 95% level are also shown at each level.


7. Satellite cloud-drift wind impact

The results from our initial OSE tests indicate only a very small impact from satellite cloud-drift winds (e.g., Fig. 7). This impact was considerably smaller than expected, motivating a more detailed examination of the use of cloud-drift winds in the RAP. Because of the dense observing network over the CONUS, and concerns regarding the quality control of satellite AMV observations over land, these data have historically been assimilated only over oceanic regions within the RUC and RAP systems. One additional experiment, allowing the assimilation of satellite AMV observations at least 100 hPa above the land surface (and requiring innovations for the u- and υ-wind components to be less than 8 m s−1 at all levels), was conducted to determine whether additional AMV impacts could be achieved by assimilating these observations over land within the RAP system (Table 4).
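The two screening criteria described above can be sketched as a simple per-observation check. This is an illustration under stated assumptions, not the RAP/GSI implementation: the function name and the dict-based observation structure are ours:

```python
def amv_passes_qc(obs, p_sfc_hpa, over_land, bg_u, bg_v,
                  min_above_sfc_hpa=100.0, max_innov_ms=8.0):
    """Screening for a single AMV, mirroring the checks in the text.

    obs: dict with keys 'p_hpa', 'u', 'v' (hypothetical structure).
    bg_u, bg_v: background (first guess) wind components (m/s).
    """
    # Over land, reject winds within 100 hPa of the surface pressure.
    if over_land and obs['p_hpa'] > p_sfc_hpa - min_above_sfc_hpa:
        return False
    # Reject if either wind-component innovation (observation minus
    # background) exceeds 8 m/s; this check applies at all levels.
    if abs(obs['u'] - bg_u) > max_innov_ms or abs(obs['v'] - bg_v) > max_innov_ms:
        return False
    return True

# A 700-hPa AMV over land with small innovations passes...
ok = amv_passes_qc({'p_hpa': 700.0, 'u': 12.0, 'v': 3.0}, 1000.0, True, 10.0, 2.0)
# ...but a 950-hPa AMV over the same point is too close to the surface.
too_low = amv_passes_qc({'p_hpa': 950.0, 'u': 12.0, 'v': 3.0}, 1000.0, True, 10.0, 2.0)
```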

Figure 19 shows vertical profiles of the impact on 6-h RMS error from these experiments for each variable. The original AMV OSE indicated a positive impact of approximately 0.04 m s−1 in the 400–100-hPa layer; assimilating AMVs over land adds approximately 0.01–0.02 m s−1, for a net effect of about 0.05 m s−1 in that layer (Fig. 19a). Additional small increases in observation impact on wind RMSE are seen down to approximately 750 hPa. The impacts on temperature and RH forecasts are very small (Fig. 19). The experiment assimilating AMVs over land is used as the control experiment for the calculation of AMV impacts elsewhere in this study (experiment J in Figs. 4–7, 9, and 11–13), with a maximum AMV impact for 400–100-hPa wind in summer (Fig. 9c) of 0.05 m s−1 at 6-h duration, smaller than for some other observation types but statistically significant (per the candlestick “wick”).

Fig. 19.

As in Fig. 18, but showing the vertical forecast impact for the original no-AMV experiment (blue) and the experiment assimilating AMVs over land as well as over water (red).


8. Verification of surface forecasts

While upper-level forecasts are important for aviation and other applications, surface weather forecasts are needed by other user communities. Figure 20 shows several of the summer and winter experiments verified against METAR observations over the High-Resolution Rapid Refresh (HRRR) domain (see Fig. 1); only the larger-impact experiments are shown. Surface observations have a very strong impact on surface forecast skill, particularly for temperature and moisture forecasts, with the strongest impact for forecasts initialized during the late afternoon and early evening in summer. Aircraft observations have the second-largest impact, with differences peaking for forecasts initialized during the daytime and exceeding those of surface observations during summer for 2-m temperature and 10-m winds from 1600 to 2000 UTC. GPS-met PW observations contribute modestly to dewpoint forecasts during the summer period, while rawinsondes, with their twice-daily launches, also contribute only modestly to forecast skill. Other experiments (not shown) generally exhibit smaller impacts.

Fig. 20.

Average diurnal cycle of RAP 6-h forecast RMSEs vs METAR observations for (a),(d) temperature (K), (b),(e) dewpoint temperature (K), and (c),(f) vector wind (m s−1) for the (a)–(c) summer and (d)–(f) winter periods, verified over the HRRR domain (see Fig. 1). Shown are the no-surface experiment (green), the no-raobs experiment (black), the no GPS-PW experiment (blue), and the no-aircraft experiment (red).


9. Conclusions from new RAP OSEs in this study

An extensive set of observation system experiments covering three seasons was conducted with the hourly updated Rapid Refresh model using hybrid ensemble–variational data assimilation. Experiments were conducted in each season for nine different observation types. The heterogeneous observation system over North America was found to be effective in reducing 3–12-h tropospheric forecast errors, with contributions from all observing systems. Aircraft data were the most important observation type overall for short-range forecasts of wind, RH, and temperature, with average error reductions of 15%–30% for 6-h forecasts in the troposphere and lower stratosphere. Aircraft observations are followed in importance by satellite (GOES) and surface observations. Also evident are strong cross-variable impacts among mass, wind, and moisture variables, due in part to the GSI hybrid ensemble–variational data assimilation.

The greatest impact on wind forecasts comes from aircraft observations, followed by rawinsonde observations; this result over a North American domain is consistent with that found by B10 over a domain covering the lower 48 United States, except that rawinsondes have relatively less impact here. In addition, GOES satellite observations and surface observations contribute at a similar level as rawinsondes. The considerably lower relative impact of rawinsondes in this study is possibly due to the relatively steady volume of rawinsonde observations over the years, while other observation types have increased in coverage, quality, and inclusion via improved data assimilation.

For temperature forecasts, B10 found that the impact of aircraft observations was approximately equal to that of rawinsondes, and that surface observations contributed at a comparable level during the summer. In this new study, while surface observations remain as important as rawinsondes for overall temperature forecasts, aircraft observations now far outweigh both.

For RH forecasts, B10 found that rawinsondes were the most important observation type, with aircraft, GPS PW, and surface observations playing a secondary role. In these new RAP OSEs, it is seen that aircraft, rawinsonde, and surface observations contribute nearly equally for 1000–400-hPa RH forecasts. GOES satellite observations also emerge as a major player for RH forecast skill, with lesser (but still significant) impacts coming from radar reflectivity and GPS PW observations. Radar reflectivity observations remain critical for the initialization of higher-resolution rapidly updated NWP systems but, evidently, also contribute to forecast skill at 13-km grid spacing.

A more detailed investigation of aircraft observation impact was made through additional experiments with subsets of aircraft data. For upper-level (400–100 hPa) wind forecast accuracy (important for aviation operations, for instance), about ¼ of this impact was from ascent–descent aircraft reports below 350 hPa (with this fraction increasing with forecast lead time). Without aircraft temperature and RH observations, the overall upper-level short-range wind forecast skill impact from aircraft decreased by about ⅓. Additional, more detailed aircraft-data experiments revealed that about ⅔ of the significant aircraft impact on RH forecasts came from wind and temperature observations, but that this impact increased by about another 50% with the inclusion of aircraft water vapor observations [Water Vapor Sensing System version two (WVSS-II); Petersen et al. (2016)], contained within approximately ⅛ of aircraft reports.

Significant cross-variable impacts from GOES satellite observations (atmospheric motion vectors and cloud-top retrievals) are evident within our experiments. This set of observations was not tested by B10; however, the results presented here suggest that GOES observations play an important role within the observation suite. GOES satellite observations influence the RAP NWP system through two avenues: AMVs derived from feature tracking of satellite imagery, and cloud information (the presence or absence of cloud and, when present, its height). It is evident from the results presented here that by far the greater influence comes from the GOES cloud information. Cloud building and clearing have a direct influence on RH forecasts through the cloud analysis (described in B16), but apparently also a fairly strong cross-variable influence on temperature and wind forecasts; we hypothesize that this is related to the improved evolution of precipitation systems in all seasons (see section 4b).

Initial results from the OSEs conducted in this study led directly to several improvements in the configuration of the surface observation assimilation within the RAP/GSI system. First, it was determined that the prior configuration of applying “pseudo-observations” within the model PBL was too heavy-handed, exerting too strong an influence on the model initial conditions. A revised configuration was tested, in which pseudo-observations were inserted only every 40 hPa instead of every 20 hPa, and only up to 40% of the PBL height instead of 75%. In addition, we examined the impact of limiting METAR cloud building to 1200 m above ground level (the same limitation applied to cloud building based upon satellite observations). Both of these tests yielded favorable results, and future operational versions of the RAP and HRRR will take advantage of these developments.

Additional tests on AMV assimilation, motivated by the initially limited impact of these observations, revealed that only minimal additional impact is achieved by assimilating these observations over land. The magnitude of the additional impact appears to be unrelated to the paucity of observations in the region considered, with similar results over Alaska. Impact in this study was measured only over land (against rawinsondes) and the AMV impact is likely stronger over oceanic regions. Further work investigating the use of these observations, particularly height assignment in the tracking of optically thin features, would help to optimize their impact within rapidly updating NWP systems.

The only negative observation impact seen within this study was for radar-derived VAD wind observations during the nighttime in the warm season. The negative impact was seen chiefly for forecasts initialized at night (i.e., valid at 1200 UTC), the period commonly recognized as the main time when “biological echoes” are observed by radar (e.g., Wilczak et al. 1995). This highlights the need for improved quality control (QC) of these observations [as expected with dual-polarization radar, e.g., Ryzhkov et al. (2005); Tang et al. (2014)]. VAD wind observations with an additional level of QC are now available operationally; future work will examine the impact of this additional QC step.

In summary, the current observation suite provides an invaluable heterogeneous dataset for the initialization of rapidly updating NWP systems across North America. The dominance of commercial aircraft data, whose availability provided the initial motivation for developing hourly cycling models, persists in present-day systems, and will likely continue for future versions of the RAP and other rapidly updated models including the 3-km HRRR. As regional NWP systems continue to shift toward an ensemble data assimilation and ensemble forecast framework over the next several years (e.g., Whitaker et al. 2008; Tollerud et al. 2013), rapidly updating models are poised to continue to excel by taking advantage of these asynoptic observations.

Acknowledgments

The RAP was developed under significant support from NOAA, the Federal Aviation Administration (FAA), and the Department of Energy. The authors thank Dr. Ming Hu of ESRL/GSD for his help in configuring the experiments presented here; Dr. John M. Brown, also of GSD, for an insightful review of this manuscript; and three anonymous reviewers for further helpful comments.

REFERENCES

  • Arnold, C. P., Jr., and C. H. Dey, 1986: Observing-systems simulation experiments: Past, present, and future. Bull. Amer. Meteor. Soc., 67, 687695, doi:10.1175/1520-0477(1986)067<0687:OSSEPP>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Benjamin, S. G., and et al. , 2004a: An hourly assimilation–forecast cycle: The RUC. Mon. Wea. Rev., 132, 495518, doi:10.1175/1520-0493(2004)132<0495:AHACTR>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Benjamin, S. G., B. E. Schwartz, E. J. Szoke, and S. E. Koch, 2004b: The value of wind profiler data in U.S. weather forecasting. Bull. Amer. Meteor. Soc., 85, 18711886, doi:10.1175/BAMS-85-12-1871.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Benjamin, S. G., S. Weygandt, D. Devenyi, J. M. Brown, G. Manikin, T. L. Smith, and T. Smirnova, 2004c: Improved moisture and PBL initialization in the RUC using METAR data. 22nd Conf. on Severe Local Storms, Hyannis, MA, Amer. Meteor. Soc., 17.3. [Available online at https://ams.confex.com/ams/11aram22sls/techprogram/paper_82023.htm.]

  • Benjamin, S. G., B. D. Jamison, W. R. Moninger, S. R. Sahm, B. Schwartz, and T. W. Schlatter, 2010: Relative short-range forecast impact from aircraft, profiler, radiosonde, VAD, GPS-PW, METAR, and mesonet observations via the RUC hourly assimilation cycle. Mon. Wea. Rev., 138, 13191343, doi:10.1175/2009MWR3097.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Benjamin, S. G., and et al. , 2016: A North American hourly assimilation and model forecast cycle: The Rapid Refresh. Mon. Wea. Rev., 144, 16691694, doi:10.1175/MWR-D-15-0242.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Bresky, W. C., J. M. Daniels, A. A. Bailey, and S. T. Wanzong, 2012: New methods toward minimizing the slow speed bias associated with atmospheric motion vectors. J. Appl. Meteor. Climatol., 51, 21372151, doi:10.1175/JAMC-D-11-0234.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Cardinali, C., 2009: Monitoring the observation impact on the short-range forecast. Quart. J. Roy. Meteor. Soc., 135, 239258, doi:10.1002/qj.366.

  • Côté, J., M. M. Roch, A. Staniforth, and L. Fillion, 1993: A variable-resolution semi-Lagrangian finite-element global model of the shallow water equations, Mon. Wea. Rev., 121, 231243, doi:10.1175/1520-0493(1993)121<0231:AVRSLF>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Daniels, T. S., W. R. Moninger, and R. D. Mamrosh, 2006: Tropospheric Airborne Meteorological Data Reporting (TAMDAR) overview. 10th Symp. on Integrated Observing and Assimilation Systems for Atmosphere, Oceans, and Land Surface, Atlanta, GA, Amer. Meteor. Soc., 9.1. [Available online at https://ams.confex.com/ams/Annual2006/techprogram/paper_104773.htm.]

  • Gutman, S. I., and S. G. Benjamin, 2001: The role of ground-based GPS meteorological observations in numerical weather prediction. GPS Solutions, 4, 1624, doi:10.1007/PL00012860.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Hollingsworth, A., P. Lonnberg, L. Illari, K. Arpe, and A. J. Simmons, 1986: Monitoring of observation and analysis quality by a data assimilation system. Mon. Wea. Rev., 114, 861879, doi:10.1175/1520-0493(1986)114<0861:MOOAAQ>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Huang, X.-Y., and P. Lynch, 1993: Diabatic digital-filtering initialization: Application to the HIRLAM model. Mon. Wea. Rev., 121, 589603, doi:10.1175/1520-0493(1993)121<0589:DDFIAT>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Ingleby, B., 2015: Global assimilation of air temperature, humidity, wind and pressure from surface stations. Quart. J. Roy. Meteor. Soc., 141, 504517, doi:10.1002/qj.2372.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Kalnay, E., S. J. Lord, and R. D. McPherson, 1998: Maturity of operational numerical weather prediction: Medium range. Bull. Amer. Meteor. Soc., 79, 27532769, doi:10.1175/1520-0477(1998)079<2753:MOONWP>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Kelleher, K. E., and et al. , 2007: Project CRAFT: A real-time delivery system for NEXRAD level II data via the Internet. Bull. Amer. Meteor. Soc., 88, 10451057, doi:10.1175/BAMS-88-7-1045.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Kleist, D. T., D. F. Parrish, J. C. Derber, R. Treadon, W.-S. Wu, and S. Lord, 2009: Introduction of the GSI into the NCEP Global Data Assimilation System. Wea. Forecasting, 24, 16911705, doi:10.1175/2009WAF2222201.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Lakshmanan, V., J. Zhang, and K. Howard, 2010: A technique to censor biological echoes in radar reflectivity. J. Appl. Meteor. Climatol., 49, 453462, doi:10.1175/2009JAMC2255.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Langland, R. H., and N. L. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56, 189201, doi:10.3402/tellusa.v56i3.14413.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Langland, R. H., and et al. , 2016: Forecast Sensitivity–Observation Impact (FSOI) Inter-comparison Experiment. Third Int. Winds Workshop, Monterey, CA, NOAA–EUMETSAT–WMO. [Available online at http://cimss.ssec.wisc.edu/iwwg/iww13/talks/01_Monday/1650_IWW13_NRL_FSOI_Langland.pdf.]

  • Le Marshall, J., and et al. , 2007: The Joint Center for Satellite Data Assimilation. Bull. Amer. Meteor. Soc., 88, 329340, doi:10.1175/BAMS-88-3-329.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Lhermitte, R. M., and D. Atlas, 1960: Precipitation motion by pulse Doppler. Preprints, Ninth Weather Radar Conf., Kansas City, MO, Amer. Meteor. Soc., 218–223.

  • Lin, H., S. S. Weygandt, S. G. Benjamin, and M. Hu, 2017: Satellite radiance data assimilation within the hourly updated Rapid Refresh. Wea. Forecasting, doi:10.1175/WAF-D-16-0215.1, in press.

    • Search Google Scholar
    • Export Citation
  • Lupu, C., C. Cardinali, and A. P. McNally, 2015: Adjoint-based forecast sensitivity applied to observation-error variance tuning. Quart. J. Roy. Meteor. Soc., 141, 31573165, doi:10.1002/qj.2599.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • McMurdie, L., and C. Mass, 2004: Major numerical forecast failures over the northeast Pacific. Wea. Forecasting, 19, 338356, doi:10.1175/1520-0434(2004)019<0338:MNFFOT>2.0.CO;2.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Minnis, P., and et al. , 2008: Near-real time cloud retrievals from operational and research meteorological satellites. Remote Sensing of Clouds and the Atmosphere XIII, R. H. Picard et al., Eds., International Society for Optical Engineering (SPIE Proceedings, Vol. 7107-2), 710703, doi:10.1117/12.800344.

    • Crossref
    • Export Citation
  • Minnis, P., and et al. , 2011: CERES edition-2 cloud property retrievals using TRMM VIRS and Terra and Aqua MODIS data—Part I: Algorithms. IEEE Trans. Geosci. Remote Sens., 49, 43744400, doi:10.1109/TGRS.2011.2144601.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Moninger, W. R., R. D. Mamrosh, and P. M. Pauley, 2003: Automated meteorological reports from commercial aircraft. Bull. Amer. Meteor. Soc., 84, 203216, doi:10.1175/BAMS-84-2-203.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Moninger, W. R., S. G. Benjamin, B. D. Jamison, T. W. Schlatter, T. L. Smith, and E. J. Szoke, 2010: Evaluation of regional aircraft observations using TAMDAR. Wea. Forecasting, 25, 627645, doi:10.1175/2009WAF2222321.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • NOAA, 2013: NOAA research program overview: Sandy supplemental. NOAA Rep., 2 pp. [Available online at http://research.noaa.gov/sites/oar/Documents/oarProgramOverview_SandySupplemental_CC.pdf.]

  • Peckham, S. E., T. G. Smirnova, S. G. Benjamin, J. M. Brown, and J. S. Kenyon, 2016: Implementation of a digital filter initialization in the WRF Model and its application in the Rapid Refresh. Mon. Wea. Rev., 144, 99106, doi:10.1175/MWR-D-15-0219.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Petersen, R. A., 2016: On the impacts and benefits of AMDAR observations in operational forecasting. Part I: A review of the impacts of automated aircraft wind and temperature reports. Bull. Amer. Meteor. Soc., 97, 585602, doi:10.1175/BAMS-D-14-00055.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Petersen, R. A., L. Cronce, R. Mamrosh, R. Baker, and P. Pauley, 2016: On the impact and future benefits of AMDAR observations in operational forecasting. Part II: Water vapor observations. Bull. Amer. Meteor. Soc., 97, 21172133, doi:10.1175/BAMS-D-14-00211.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Rogers, E., and et al. , 2009: The NCEP North American Mesoscale modeling system: Recent changes and future plans. 23rd Conf. on Weather Analysis and Forecasting/19th Conf. on Numerical Weather Prediction, Omaha, NE, Amer. Meteor. Soc., 2A4. [Available online at https://ams.confex.com/ams/23WAF19NWP/techprogram/paper_154114.htm.]

  • Ryzhkov, A., S. E. Giangrande, V. M. Melnikov, and T. J. Schuur, 2005: Calibration issues of dual-polarization radar measurements. J. Atmos. Oceanic Technol., 22, 11381155, doi:10.1175/JTECH1772.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Shao, H., and et al. , 2016: Bridging research to operations transitions: Status and plans of community GSI. Bull. Amer. Meteor. Soc., 97, 14271440, doi:10.1175/BAMS-D-13-00245.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Shapiro, M., and A. Thorpe, 2004: THORPEX international science plan, version 3. WMO/TD-1246, WWRP/THORPEX 2, 55 pp. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/CD_ROM_international_science_plan_v3.pdf.]

  • Skamarock, W. C., and et al. , 2008: A description of the Advanced Research WRF version 3. NCAR Tech. Note NCAR/TN-475+STR, 113 pp., doi:10.5065/D68S4MVH.

    • Crossref
    • Export Citation
  • Smith, T. L., S. G. Benjamin, S. I. Gutman, and S. Sahm, 2007: Short-range forecast impact from assimilation of GPS-IPW observations into the Rapid Update Cycle. Mon. Wea. Rev., 135, 29142930, doi:10.1175/MWR3436.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Tang, L., J. Zhang, C. Langston, J. Krause, K. Howard, and V. Lakshmanan, 2014: A physically based precipitation–nonprecipitation radar echo classifier using polarimetric and environmental data in a real-time national system. Wea. Forecasting, 29, 11061119, doi:10.1175/WAF-D-13-00072.1.

    • Crossref
    • Search Google Scholar
    • Export Citation
  • Tollerud, E. I., and et al. , 2013: The DTC ensembles task: A new testing and evaluation facility for mesoscale ensembles. Bull. Amer. Meteor. Soc., 94, 321327, doi:10.1175/BAMS-D-11-00209.1.

  • Velden, C., and Coauthors, 2005: Recent innovations in deriving tropospheric winds from meteorological satellites. Bull. Amer. Meteor. Soc., 86, 205–223, doi:10.1175/BAMS-86-2-205.

  • Wang, X., D. Parrish, D. Kleist, and J. Whitaker, 2013: GSI 3DVar-based ensemble–variational hybrid data assimilation for NCEP Global Forecast System: Single-resolution experiments. Mon. Wea. Rev., 141, 4098–4117, doi:10.1175/MWR-D-12-00141.1.

  • Weatherhead, E. C., and Coauthors, 1998: Factors affecting the detection of trends: Statistical considerations and applications to environmental data. J. Geophys. Res., 103, 17 149–17 161, doi:10.1029/98JD00995.

  • Weckwerth, T. M., 2000: The effect of small-scale moisture variability on thunderstorm initiation. Mon. Wea. Rev., 128, 4017–4030, doi:10.1175/1520-0493(2000)128<4017:TEOSSM>2.0.CO;2.

  • Weckwerth, T. M., and Coauthors, 2004: An overview of the International H2O Project (IHOP_2002) and some preliminary highlights. Bull. Amer. Meteor. Soc., 85, 253–277, doi:10.1175/BAMS-85-2-253.

  • Whitaker, J. S., and T. M. Hamill, 2002: Ensemble data assimilation without perturbed observations. Mon. Wea. Rev., 130, 1913–1924, doi:10.1175/1520-0493(2002)130<1913:EDAWPO>2.0.CO;2.

  • Whitaker, J. S., T. M. Hamill, X. Wei, Y. Song, and Z. Toth, 2008: Ensemble data assimilation with the NCEP Global Forecast System. Mon. Wea. Rev., 136, 463–482, doi:10.1175/2007MWR2018.1.

  • Wilczak, J. M., and Coauthors, 1995: Contamination of wind profiler data by migrating birds: Characteristics of corrupted data and potential solutions. J. Atmos. Oceanic Technol., 12, 449–467, doi:10.1175/1520-0426(1995)012<0449:COWPDB>2.0.CO;2.

  • Wolfe, D. E., and S. I. Gutman, 2000: Developing an operational, surface-based, GPS, water vapor observing system for NOAA: Network design and results. J. Atmos. Oceanic Technol., 17, 426–440, doi:10.1175/1520-0426(2000)017<0426:DAOSBG>2.0.CO;2.

  • WMO, 2016: USA AMDAR program—Smoothed monthly average of daily (aircraft) report totals. [Available online at https://www.wmo.int/pages/prog/www/GOS/ABO/data/statistics/aircraft_obs_cmc_mthly_ave_daily_reports_by_type.jpg.]

  • Wu, W.-S., R. J. Purser, and D. F. Parrish, 2002: Three-dimensional variational analysis with spatially inhomogeneous covariances. Mon. Wea. Rev., 130, 2905–2916, doi:10.1175/1520-0493(2002)130<2905:TDVAWS>2.0.CO;2.

  • Zhu, Y., and R. Gelaro, 2008: Observation sensitivity calculations using the adjoint of the Gridpoint Statistical Interpolation (GSI) analysis system. Mon. Wea. Rev., 136, 335–351, doi:10.1175/MWR3525.1.
