• Andrews, P., 1997: The development of an operational variational data analysis scheme at the UKMO. NWP On-Line Scientific Note No. 5, Meteorological Office, Bracknell, Berkshire, United Kingdom. [Available online at http://www.met-office.gov.uk/sec5/NWP/NWP_ScienceNotes/No5/No5.html.].

  • Anthes, R. A., 1977: A cumulus parameterization scheme utilizing a one-dimensional cloud model. Mon. Wea. Rev.,105, 270–286.

  • ——, and T. T. Warner, 1978: Development of hydrodynamic models suitable for air pollution and other mesometeorological studies. Mon. Wea. Rev.,106, 1045–1078.

  • ——, E.-Y. Hsie, and Y.-H. Kuo, 1987: Description of the Penn State/NCAR mesoscale model version 4 (MM4). NCAR Tech. Note NCAR/TN-282+STR, 66 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • ——, Y.-H. Kuo, E.-Y. Hsie, S. Low-Nam, and T. W. Bettge, 1989: Estimation of skill and uncertainty in regional numerical models. Quart. J. Roy. Meteor. Soc.,115, 763–806.

  • Arakawa, A., and V. R. Lamb, 1977: Computational design of the basic dynamical processes of the UCLA general circulation model. Methods in Computational Physics, B. Adler et al., Eds., Vol. 17, Academic Press, 173–265.

  • Arnold, C. P., Jr., and C. H. Dey, 1986: Observing-systems simulation experiments: Past, present, and future. Bull. Amer. Meteor. Soc.,67, 687–695.

  • Atlas, R., 1997: Atmospheric observations and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan,75, 111–130.

  • Benjamin, S. G., and N. L. Seaman, 1985: A simple scheme for objective analyses in curved flow. Mon. Wea. Rev.,113, 1184–1198.

  • ——, K. J. Brundage, and L. L. Morone, 1994: The Rapid Update Cycle. Part I: Analysis/model description. Technical Procedures Bull. 416, NOAA/NWS, 16 pp. [Available from National Weather Service, Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • ——, J. M. Brown, K. J. Brundage, B. E. Schwartz, T. G. Smirnova, and T. L. Smith, 1998: The operational RUC-2. Preprints, 16th Conf. on Weather Analysis and Forecasting, Phoenix, AZ, Amer. Meteor. Soc., 249–252.

  • Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev.,126, 1719–1724.

  • Charney, J., M. Halem, and R. Jastrow, 1969: Use of incomplete historical data to infer the present state of the atmosphere. J. Atmos. Sci.,26, 1160–1163.

  • Cohn, S. E., N. S. Sivakumaran, and R. Todling, 1994: A fixed-lag Kalman smoother for retrospective data assimilation. Mon. Wea. Rev.,122, 2838–2867.

  • Courtier, P., and Coauthors, 1998: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: Formulation. Quart. J. Roy. Meteor. Soc.,124, 1783–1807.

  • Cressman, G., 1959: An operational objective analysis system. Mon. Wea. Rev.,87, 367–374.

  • Cullen, M. J. P., 1993: The unified forecast/climate model. Meteor. Mag.,122, 81–94.

  • Daley, R., 1991: Atmospheric Data Analysis. Cambridge University Press, 457 pp.

  • Derber, J., and F. Bouttier, 1999: A reformulation of the background error covariance in the ECMWF global data assimilation system. Tellus,51A, 195–221.

  • Dévényi, D., and T. W. Schlatter, 1994: Statistical properties of three-hour prediction “errors” derived from the Mesoscale Analysis and Prediction System. Mon. Wea. Rev.,122, 1263–1280.

  • ——, and S. G. Benjamin, 1998: Application of a three-dimensional variational analysis in RUC-2. Preprints, 12th Conf. on Numerical Weather Prediction, Phoenix, AZ, Amer. Meteor. Soc., 37–40.

  • Dudhia, J., 1989: Numerical study of convection observed during the winter monsoon experiment using a mesoscale two-dimensional model. J. Atmos. Sci.,46, 3077–3107.

  • Errico, R. M., 1999: Meeting summary: Workshop on assimilation of satellite data. Bull. Amer. Meteor. Soc.,80, 463–471.

  • Evensen, G., 1997: Advanced data assimilation for strongly nonlinear dynamics. Mon. Wea. Rev.,125, 1342–1354.

  • Fast, J. D., 1995: Mesoscale modeling and four-dimensional data assimilation in areas of highly complex terrain. J. Appl. Meteor.,34, 2762–2782.

  • Fischer, C., A. Joly, and F. Lalaurette, 1998: Error growth and Kalman filtering within an idealized baroclinic flow. Tellus,50A, 596–615.

  • Gandin, L. S., 1963: Objective Analysis of Meteorological Fields. Gidrometeorologicheskoe Izdatel’stvo, Leningrad. [Translated from Russian, 1965, Israel Program for Scientific Translations, 242 pp.].

  • Goodge, G. W., Ed., 1992: Storm data and unusual weather phenomena. Storm Data, Vol. 34, No. 1, 54 pp. [Available from National Climatic Data Center, Federal Building, 37 Battery Park Avenue, Asheville, NC 28801-2733.].

  • Grell, G. A., J. Dudhia, and D. R. Stauffer, 1994: A description of the fifth-generation Penn State/NCAR mesoscale model (MM5). NCAR Tech. Note NCAR/TN-398+STR, 138 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • Guo, Y.-R., Y.-H. Kuo, J. Dudhia, and D. Parsons, 2000: Four-dimensional variational data assimilation of heterogeneous mesoscale observations for a strong convective case. Mon. Wea. Rev.,128, 619–643.

  • Harms, D. E., S. Raman, and R. V. Madala, 1992: An examination of four-dimensional data-assimilation techniques for numerical weather prediction. Bull. Amer. Meteor. Soc.,73, 425–440.

  • Hoke, J. E., and R. A. Anthes, 1976: The initialization of numerical models by a dynamic-initialization technique. Mon. Wea. Rev.,104, 1551–1556.

  • Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev.,126, 796–811.

  • Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential and variational. J. Meteor. Soc. Japan,75, 181–189.

  • Kain, J. S., and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci.,47, 2784–2802.

  • ——, and ——, 1993: Convective parameterization for mesoscale models: The Kain–Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.

  • Kalnay, E., and Coauthors, 1997: Data assimilation in the ocean and in the atmosphere: What should be next? J. Meteor. Soc. Japan,75, 489–496.

  • Klinker, E., F. Rabier, G. Kelly, and J.-F. Mahfouf, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. III: Experimental results and diagnostics with operational configuration. Quart. J. Roy. Meteor. Soc.,126, 1191–1215.

  • Kuo, Y.-H., and Y.-R. Guo, 1989: Dynamic initialization using observations from a hypothetical network of profilers. Mon. Wea. Rev.,117, 1975–1998.

  • Lanzinger, A., and R. Steinacker, 1990: A fine mesh analysis scheme designed for mountainous terrain. Meteor. Atmos. Phys.,43, 213–219.

  • LeDimet, F.-X., and O. Talagrand, 1986: Variational algorithms for analysis and assimilation of meteorological observations. Tellus,38A, 97–110.

  • LeMarshall, J. F., L. M. Leslie, and C. Spinoso, 1997: The generation and assimilation of cloud-drift winds in numerical weather prediction. J. Meteor. Soc. Japan,75, 383–393.

  • Leslie, L. M., J. F. LeMarshall, R. P. Morison, C. Spinoso, R. J. Purser, N. Pescod, and R. Seecamp, 1998: Improved hurricane track forecasting from the continuous assimilation of high quality satellite wind data. Mon. Wea. Rev.,126, 1248–1257.

  • Lewis, J. M., and J. C. Derber, 1985: The use of adjoint equations to solve a variational adjustment problem with advective constraints. Tellus,37A, 309–322.

  • Lönnberg, P., and A. Hollingsworth, 1986: The statistical structure of short-range forecast errors as determined from radiosonde data. II: The covariance of height and wind errors. Tellus,38A, 137–161.

  • Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc.,112, 1177–1194.

  • ——, R. S. Bell, and B. MacPherson, 1991: The Meteorological Office analysis correction data assimilation scheme. Quart. J. Roy. Meteor. Soc.,117, 59–89.

  • McPherson, R. D., 1975: Progress, problems, and prospects in meteorological data assimilation. Bull. Amer. Meteor. Soc.,56, 1154–1166.

  • Michelson, S. A., and N. L. Seaman, 2000: Assimilation of NEXRAD-VAD winds in summertime meteorological simulations over the northeastern United States. J. Appl. Meteor.,39, 367–383.

  • Miller, P. A., and S. G. Benjamin, 1992: A system for the hourly assimilation of surface observations in mountainous and flat terrain. Mon. Wea. Rev.,120, 2342–2359.

  • Parrish, D. F., and J. C. Derber, 1992: The National Meteorological Center’s spectral statistical interpolation analysis system. Mon. Wea. Rev.,120, 1747–1763.

  • ——, ——, R. J. Purser, W.-S. Wu, and Z.-X. Pu, 1997: The NCEP global analysis system: Recent improvements and future plans. J. Meteor. Soc. Japan,75, 359–365.

  • Parsons, D. B., and J. Dudhia, 1997: Observing system simulation experiments and objective analysis tests in support of the goals of the Atmospheric Radiation Measurement Program. Mon. Wea. Rev.,125, 2353–2381.

  • Pu, Z.-X., E. Kalnay, J. C. Derber, and J. G. Sela, 1997a: Using forecast sensitivity patterns to improve future forecast skill. Quart. J. Roy. Meteor. Soc.,123, 1035–1053.

  • ——, ——, D. Parrish, W. Wu, and Z. Toth, 1997b: The use of bred vectors in the NCEP global 3D variational analysis system. Wea. Forecasting,12, 689–695.

  • Rabier, F., H. Järvinen, E. Klinker, J.-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. Quart. J. Roy. Meteor. Soc.,126, 1143–1170.

  • Rogers, E., D. G. Deaven, and G. J. DiMego, 1995: The regional analysis system for the operational “early” Eta Model: Original 80-km configuration and recent changes. Wea. Forecasting,10, 810–825.

  • Sasaki, Y., 1970: Some basic formulations in numerical variational analysis. Mon. Wea. Rev.,98, 875–883.

  • Schraff, C. H., 1997: Mesoscale data assimilation and prediction of low stratus in the alpine region. Meteor. Atmos. Phys.,64, 21–50.

  • Seaman, N. L., 2000: Meteorological modeling for air-quality assessments. Atmos. Environ.,34, 2231–2259.

  • ——, D. R. Stauffer, and A. M. Lario-Gibbs, 1995: A multiscale four-dimensional data assimilation system applied in the San Joaquin Valley during SARMAP. Part I: Modeling design and basic performance characteristics. J. Appl. Meteor.,34, 1739–1761.

  • Shapiro, R., 1970: Smoothing, filtering and boundary effects. Rev. Geophys. Space Phys.,8, 359–387.

  • Stauffer, D. R., and N. L. Seaman, 1990: Use of four-dimensional data assimilation in a limited-area mesoscale model. Part I: Experiments with synoptic-scale data. Mon. Wea. Rev.,118, 1250–1277.

  • ——, and J.-W. Bao, 1993: Optimal determination of nudging coefficients using the adjoint technique. Tellus,45A, 358–369.

  • ——, and N. L. Seaman, 1994: Multiscale four-dimensional data assimilation. J. Appl. Meteor.,33, 416–434.

  • ——, ——, and F. S. Binkowski, 1991: Use of four-dimensional data assimilation in a limited-area mesoscale model. Part II: Effects of data assimilation within the planetary boundary layer. Mon. Wea. Rev.,119, 734–754.

  • ——, ——, T. T. Warner, and A. M. Lario, 1993: Application of an atmospheric simulation model to diagnose air-pollution transport in the Grand Canyon region of Arizona. Chem. Eng. Comm.,121, 9–26.

  • ——, ——, G. K. Hunter, S. M. Leidner, A. Lario-Gibbs, and S. Tanrikulu, 2000: A field-coherence technique for meteorological field-program design for air quality studies. Part I: description and interpolation. J. Appl. Meteor.,39, 297–316.

  • Tanrikulu, S., D. R. Stauffer, N. L. Seaman, and A. J. Ranzieri, 2000: A field-coherence technique for meteorological field-program design for air quality studies. Part II: evaluation in the San Joaquin Valley. J. Appl. Meteor.,39, 317–334.

  • Todling, R., S. E. Cohn, and N. S. Sivakumaran, 1998: Suboptimal schemes for retrospective data assimilation based on the fixed-lag Kalman filter. Mon. Wea. Rev.,126, 2274–2286.

  • Wang, W., and N. L. Seaman, 1997: A comparison study of convective parameterization schemes in a mesoscale model. Mon. Wea. Rev.,125, 252–278.

  • Zhang, D.-L., and R. A. Anthes, 1982: A high-resolution model of the planetary boundary layer—Sensitivity tests and comparisons with SESAME-79 data. J. Appl. Meteor.,21, 1594–1609.

  • Zou, X., I. M. Navon, and F. X. LeDimet, 1992: An optimal nudging data assimilation scheme using parameter estimation. Quart. J. Roy. Meteor. Soc.,118, 1163–1186.

  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev.,125, 2274–2292.

Fig. 1. SWOBS region of influence at two selected observation sites, A (Great Falls, MT) and B (Little Rock, AR). The background field is an analysis of 850-hPa temperature, valid 0000 UTC 14 Jan 1992. Contour interval is 2°C. The radius of influence R is 750 km, and the threshold ΔT is 20°C.

Fig. 2. Experimental domains for the OSSE. (a) The 30-km, 169 × 181 domain for the truth run. (b) The 90-km, 43 × 55 domain for the OSSE experiments. (c) The verification domain. Vertical cross sections are analyzed along A–B.

Fig. 3. Vertical comparison of averaged rms errors in the OSSE initial conditions to the NGM winter averages. (top) Rms error for vector wind analyses and (bottom) rms error for temperature analyses. The NGM winter averages are plotted as solid boxes, the OSSE initial conditions without intentional degradation are plotted as open circles, and the OSSE initial conditions with a 180-km phase lag and additional smoothing are plotted as open triangles.

Fig. 4. The NCEP 500-hPa analysis of height and temperature. Height is in dam, temperature and dewpoint depression in °C, height tendency in dam (12 h)−1, and wind speed in kt. Contour interval for height contours is 60 m. Contour interval for isotherms is 5°C. Valid (a) 0000 UTC 13 Jan, (b) 0000 UTC 14 Jan, and (c) 0000 UTC 15 Jan 1992.

Fig. 5. The 500-hPa height “analysis” from the OSSE truth run. Contour interval is 60 m. Valid (a) 0000 UTC 14 Jan and (b) 0000 UTC 15 Jan 1992.

Fig. 6. Hourly rms errors contrasting expts CTRL, CIRC, and SWOBS. The lines connecting open circles represent the CTRL forecast beginning at t = 0 h. The diamonds represent expt CIRC, and the cross hatches represent expt SWOBS. The dynamic initialization begins at t = −12 h and the subsequent forecast begins at t = 0 h. The (a) 500-hPa vector wind, (b) 300-hPa vector wind, (c) 850-hPa temperature, (d) 500-hPa temperature, and (e) 700-hPa mixing ratio.

Fig. 7. The 500-hPa height errors following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 10 m. Shading represents error in excess of 20 m. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 8. South–north cross section of potential temperature errors near the upper-low center following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 0.5 K. Shading represents error with magnitude in excess of 2 K. Vertical coordinate is pressure in hPa. Refer to Fig. 2 for location of cross section A–B. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 9. South–north cross section of wind speed errors near the upper-low center following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 2 m s−1. Shading represents error with magnitude in excess of 4 m s−1. Vertical coordinate is pressure in hPa. Refer to Fig. 2 for location of cross section A–B. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 10. Hourly correlation coefficient (r2) statistics contrasting expts CIRC and SWOBS verified against the truth run. The open circles represent expt CIRC, and the diamonds represent expt SWOBS. The dynamic initialization begins at t = −12 h and the subsequent forecast begins at t = 0 h. The (a) 500-hPa wind speed on full domain, (b) 500-hPa wind speed on verification domain, (c) 500-hPa temperature on full domain, and (d) 500-hPa temperature on verification domain.


A Heuristic Study on the Importance of Anisotropic Error Distributions in Data Assimilation

Tanya L. Otte, Atmospheric Sciences Modeling Division, Air Resources Laboratory, National Oceanic and Atmospheric Administration, Research Triangle Park, North Carolina

Nelson L. Seaman, Department of Meteorology, The Pennsylvania State University, University Park, Pennsylvania

David R. Stauffer, Department of Meteorology, The Pennsylvania State University, University Park, Pennsylvania

Abstract

A challenging problem in numerical weather prediction is to optimize the use of meteorological observations in data assimilation. Even assimilation techniques considered “optimal” in the “least squares” sense usually involve a set of assumptions that prescribes the horizontal and vertical distributions of analysis increments used to update the background analysis. These assumptions may impose limitations on the use of the data that can adversely affect the data assimilation and any subsequent forecast.

Virtually all widely used operational analysis and dynamic-initialization techniques assume, at some level, that the errors are isotropic and so the data can be applied within circular regions of influence around measurement sites. Whether implied or used directly, circular isotropic regions of influence are indiscriminate toward thermal and wind gradients that may reflect changes of air mass. That is, the analytic process may ignore key flow-dependent information available about the physical error structures of an individual case. Although this simplification is widely recognized, many data assimilation schemes currently offer no practical remedy.

To explore the potential value of case-adaptive, noncircular weighting in a computationally efficient manner, an approach for structure-dependent weighting of observations (SWOBS) is investigated in a continuous data assimilation scheme. In this study, SWOBS is used to dynamically initialize the PSU–NCAR Mesoscale Model using temperature and wind data in a series of observing-system simulation experiments. Results of this heuristic study suggest that improvements in analysis and forecast skill are possible with case-specific, flow-dependent, anisotropic weighting of observations.

*On assignment to National Exposure Research Laboratory, U.S. Environmental Protection Agency, Research Triangle Park, North Carolina.

Corresponding author address: Ms. Tanya L. Otte, U.S. EPA/NERL/AMD, MD-80, Research Triangle Park, NC 27711.

Email: tlotte@hpcc.epa.gov


1. Introduction

One of the most challenging problems in numerical weather prediction (NWP) is to optimize the use of the rapidly expanding asynoptic meteorological database in dynamic initialization. This difficulty was recognized a quarter century ago by Hoke and Anthes (1976), who stated, “We are often limited not so much by a lack of information, but by our inability to use all the information available in a manner consistent with the model.” Four-dimensional data assimilation (FDDA) is one way to include asynoptic data in meteorological model initialization to ultimately improve forecast skill. FDDA systems that incorporate observational information into dynamical models have been under development since the late 1960s (e.g., Charney et al. 1969). Historical overviews of FDDA have been presented by McPherson (1975) and Harms et al. (1992), and a detailed account of data assimilation theory is given in Daley (1991). Today, FDDA systems for dynamic initialization are integral parts of most operational NWP models worldwide.

Two broad types of FDDA are currently used for operational and research applications. The first is the family of intermittent assimilation methods that uses a short-term forecast (usually 1–12 h) as a first guess in an objective analysis step within an “update cycle.” In an intermittent FDDA system, the first-guess forecast (i.e., background) is enhanced by observations to produce the initial state for the next forecast cycle, and the procedure is repeated. The objective analysis is often performed using a statistically optimal method, such as three-dimensional variational analysis (3DVAR; e.g., Sasaki 1970; Lorenc 1986; Parrish and Derber 1992; Courtier et al. 1998) or optimum interpolation (OI; e.g., Gandin 1963; Lönnberg and Hollingsworth 1986). Intermittent FDDA is used at the National Centers for Environmental Prediction (NCEP) in the Rapid Update Cycle (Benjamin et al. 1994, 1998; Dévényi and Benjamin 1998), in NCEP’s Eta Data Assimilation System (Rogers et al. 1995), in NCEP’s Global Data Assimilation System (Parrish and Derber 1992; Parrish et al. 1997), and at the U.K. Met. Office (UKMO) in the Unified Model (Cullen 1993; Andrews 1997).

The second type of FDDA is continuous data assimilation, in which observations are dynamically assimilated into the model at all time steps through a designated period. This type of FDDA includes the adjoint method (e.g., Lewis and Derber 1985; LeDimet and Talagrand 1986) and Newtonian relaxation (nudging; e.g., Hoke and Anthes 1976). The adjoint method, based on four-dimensional variational analysis (4DVAR), assimilates data in a series of forward and backward integrations of the forecast model and its adjoint model. When used for model initialization, it produces an optimal initial state for the model that minimizes the sum of the squares of the errors between the model state and observations spread in space and time over the preforecast assimilation period. The adjoint approach normally assumes that the model perfectly represents all atmospheric processes. The successive iterations can cause this method to be quite expensive, and practical problems may exist in assuring that the solutions converge to a unique minimum-error state. An incremental 4DVAR technique (i.e., using reduced model resolution and simplified physics in the adjoint minimization cycles) has been operationally implemented at the European Centre for Medium-Range Weather Forecasts (ECMWF; Rabier et al. 2000; Klinker et al. 2000). However, 4DVAR currently is primarily used for research and quasi-operational purposes (e.g., Zupanski 1997; Pu et al. 1997a; Guo et al. 2000), and it is commonly viewed as the future goal of operational FDDA systems.
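For reference, the quantity minimized by such a 4DVAR initialization can be written (this equation is not given in the paper; it is the standard strong-constraint form, in the notation of Ide et al. 1997) as a cost function of the initial state over the preforecast window t0 … tN:

$$
J[\mathbf{x}(t_0)] = \tfrac{1}{2}\left[\mathbf{x}(t_0)-\mathbf{x}^b(t_0)\right]^{\mathrm T}\mathbf{B}^{-1}\left[\mathbf{x}(t_0)-\mathbf{x}^b(t_0)\right]
+ \tfrac{1}{2}\sum_{i=0}^{N}\left[\mathbf{H}_i\mathbf{x}(t_i)-\mathbf{y}^o_i\right]^{\mathrm T}\mathbf{R}_i^{-1}\left[\mathbf{H}_i\mathbf{x}(t_i)-\mathbf{y}^o_i\right],
$$

where x(t_i) is obtained by integrating the forecast model forward from x(t_0), x^b(t_0) is the background state, B is its error covariance, and the gradient of J with respect to x(t_0) is supplied by the adjoint model. The operational ECMWF implementation cited above performs an incremental version of this minimization.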

Similar to the intermittent FDDA methods, nudging assimilates data in a forward-only integration of a model (e.g., Lorenc et al. 1991). It normally does not guarantee a statistically optimal initial state, although optimal versions of the approach are possible (e.g., Zou et al. 1992; Stauffer and Bao 1993). Nudging relaxes the model’s time-dependent variables toward the observations during a specified period through a synthetic tendency term added into the prognostic equations. The synthetic tendency term is based on the continuously evolving difference between the model state and the observations. Despite its suboptimal design, the nudging scheme is a versatile and efficient method of continuous data assimilation that is widely used for research applications, especially in air quality studies (e.g., Stauffer et al. 1993; Seaman et al. 1995; Fast 1995; Parsons and Dudhia 1997; Leslie et al. 1998; Michelson and Seaman 2000; Seaman 2000; Stauffer et al. 2000; Tanrikulu et al. 2000).

Virtually all common analysis and dynamic-initialization techniques assume, at some level, that the innovation vector (i.e., observation increment, or the vector of observations minus the background interpolated to the observation sites) can be applied within circular horizontal regions of influence surrounding the measurement sites. This includes many intermittent assimilation systems, which may be based on 3DVAR or OI, as well as continuous assimilation schemes, such as 4DVAR and nudging. That assumption, however, may impose artificial limitations on the use of the data that can adversely affect the data assimilation and subsequent forecasts. Whether implied or used directly, circular isotropic regions of influence are indiscriminate toward temperature and wind gradients that may reflect changes of air mass. That is, the analytic process normally ignores valuable flow-dependent information about the error structures of individual cases. A notable exception, the Kalman filter (e.g., Cohn et al. 1994; Evensen 1997), allows the error-covariance (observational weighting) structure to evolve in space and time and be truly flow dependent. Kalman filter techniques require enormous computational expense and may be impractical for current operational implementation (e.g., Houtekamer and Mitchell 1998). However, simplified “suboptimal” versions of this technique are currently being investigated (e.g., Todling et al. 1998; Burgers et al. 1998), and technological advances (parallel programming and faster hardware) may make Kalman filters more attractive for operational purposes in the future.

The assumption of homogeneous, isotropic, and otherwise flow-independent error characteristics is a common simplification often justified by operational requirements, yet it is recognized as a fallacy of many current operational data assimilation systems. For example, Daley (1991) noted that there is “substantial anisotropy” in the climatological observed-minus-background correlation patterns in the 500-hPa geopotential over Australia that is physically consistent with the characteristic tilt of the transient trough–ridge patterns and hence the dynamic transports of heat and momentum in the atmospheric circulation. At the 1995 International Symposium on Assimilation of Observations in Meteorology and Oceanography, one of the themes of the panel discussion (summarized by Kalnay et al. 1997) was the need to include “errors of the day” in atmospheric data assimilation. In fact, J. Purser stated in Kalnay et al. (1997) that the use of homogeneous and isotropic background covariances is a “major deficiency” of assimilation systems and listed this among his top priorities for improving data assimilation. Houtekamer and Mitchell (1998) called the use of generally homogeneous and isotropic error statistics a “serious approximation” and a “shortcoming” of data assimilation. Errico (1999) also commented on the need for “realistic observational error characteristics, including shapes of distribution functions” to successfully assimilate satellite data. However, it may be difficult, if not impossible, to specify a priori the structure of the error covariances.

A handful of studies (other than Kalman filtering) have examined the use of various forms of anisotropic spatial weighting functions in an effort to assimilate data where they are “more appropriate.” For example, in an OI system Lanzinger and Steinacker (1990) incorporated orography into their calculation of anisotropic correlation (weighting) functions with “barriers” for lower-tropospheric analyses of Montgomery potential and geopotential height. Dévényi and Schlatter (1994) spread their OI observational increments along isentropic surfaces where the flow patterns are expected to be more coherent. Miller and Benjamin (1992) used elevation and potential temperature differences between the observation site and the target grid point in their horizontal correlation functions for analyzing surface potential temperature, wind, and humidity. The effective distance of an observation from a given grid point was made greater when the potential temperature difference was large, and this resulted in a much lower weight for an observation located, for example, across an airmass boundary.

Stauffer and Seaman (1994) used a complex terrain case in the southwest United States to demonstrate that observation nudging can also be applied effectively to surface data by using anisotropic weighting functions based on differences in surface pressure (terrain height) to account for the reduced representativeness due to orography. Schraff (1997) showed that the observation-nudging approach of Stauffer and Seaman (1994), with its spreading along isobaric surfaces rather than model sigma (terrain following) surfaces, yielded positive results similar to those from spreading along isentropes (and along pressure surfaces during neutral or unstable conditions) for two complex terrain cases in the alpine region. This study concluded that the involvement of case-specific dynamics in a continuous assimilation process probably makes nudging less sensitive to the details of the spatial spreading than 3D analysis methods such as OI. Pu et al. (1997b) used vector-breeding techniques to compute daily error statistics in NCEP’s global model, and they found that bred-vector ensembles can effectively capture the “errors of the day.” Errico (1999) also noted recent efforts to develop more realistic, flow-dependent error correlation models for variational assimilation.

The primary objective of this study is to explore the potential of a case-specific, structure-dependent weighting approach to continuous data assimilation. Although neither nudging nor 4DVAR is widely used in operational NWP systems, many operational centers set 4DVAR-based data assimilation as a future goal. Head-to-head studies that compare current variational and nudging schemes have shown that variational methodology is comparable to or slightly better than nudging (3% lower error), but at the cost of at least an order of magnitude more computational time (e.g., LeMarshall et al. 1997; Leslie et al. 1998). In fact, Leslie et al. (1998) contended that nudging-based systems that incorporate hourly observations might even be candidates for the next-generation operational forecast systems due to their practicality. This paper uses the more computationally efficient nudging method to demonstrate a formulation of anisotropic weighting for observations in a continuous data assimilation system. The approach, which will be referred to as structure-dependent weighting for observations (SWOBS), is applied to dynamically initialize the Pennsylvania State University–National Center for Atmospheric Research (PSU–NCAR) Mesoscale Model using temperature and wind fields in an observing-system simulation experiment (OSSE). The purpose of this paper is not to advocate the utility of nudging or to compare it with other methods of FDDA. Rather, it is intended to demonstrate the potential added value of using anisotropic, noncircular, case-specific, and flow-dependent observation weighting functions.

2. Model description

The PSU–NCAR Mesoscale Model used for this study evolved from the work of Anthes and Warner (1978) and is documented in Grell et al. (1994). The two main components of interest here are the prognostic model and the FDDA scheme.

a. The prognostic model

A full-physics version of the PSU–NCAR Mesoscale Model is used in this study. The prognostic variables include the horizontal wind components, temperature, mixing ratio, and surface pressure. The boundary layer is parameterized using a multilayer Blackadar scheme (Zhang and Anthes 1982; Grell et al. 1994) that includes surface fluxes of heat, moisture, and momentum, and a nonlocal closure for convectively unstable conditions. The moisture cycle includes convective parameterizations (see section 4) and explicit equations for liquid/ice cloud and precipitation (Dudhia 1989). The grid structure of the PSU–NCAR Mesoscale Model is based on the staggered Arakawa-B grid (Arakawa and Lamb 1977). The hydrostatic version (MM4) is described by Anthes et al. (1987) and is used to simulate the synoptic-scale event described in section 5.

b. Four-dimensional data assimilation

The standard PSU–NCAR data assimilation system in MM4 and its nonhydrostatic successor, MM5, uses the nudging approach, which can be applied to either gridded analyses or individual observations. For analysis nudging, observed states are defined for all grid points at specific time intervals (via gridded analyses) and are temporally interpolated between those times (Stauffer and Seaman 1990; Stauffer et al. 1991). Nudging toward individual observations (observation nudging), on the other hand, is particularly attractive for assimilating large quantities of asynoptic data because the data are applied over individualized intervals centered at the times for which they are valid. The SWOBS technique is demonstrated in this study through the continuous direct assimilation of data using the observation-nudging approach.

Observation nudging in the PSU–NCAR Mesoscale Model involves adding a weighted forcing term to the prognostic equations based on the spatial and temporal separation of the data from a grid point at a given time step (Stauffer and Seaman 1990). Wind, temperature, and mixing ratio can easily be used by the nudging scheme in any combination and with independent weighting coefficients to assimilate observations during a time window centered about each observation. The reciprocal of the “observation nudging coefficient” shown in Table 1 represents the e-folding time over which the model error will be reduced in the absence of any other model forcing. In general, the magnitude of this nudging term must be kept relatively small compared to the other model forcings. Further details on the physical considerations required to properly define the nudging weights or coefficients are given in Stauffer and Seaman (1990) and Stauffer et al. (1991). For example, in this study, nudging toward temperature and mixing ratio observations is restricted to regions above the planetary boundary layer (PBL) to preserve the model-simulated PBL structure (Stauffer et al. 1991).
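As a concrete illustration (a minimal single-point sketch in Python, not the model's actual code; the normalization of the weights shown here is an assumption made for the example), the nudging forcing for one prognostic variable at one grid point can be written as a weighted combination of the innovations:

```python
import numpy as np

def nudging_tendency(model_at_obs, obs_values, weights, g_alpha):
    """Synthetic (nudging) tendency for one prognostic variable at one grid point.

    The tendency is proportional to a weighted combination of the innovations
    (observation minus model interpolated to the observation site).  `weights`
    are the combined space-time weights (0-1) of each observation at this grid
    point, and `g_alpha` is the nudging coefficient (s^-1), whose reciprocal is
    the e-folding time mentioned in the text.  The w**2 / sum(w) normalization
    is assumed here for illustration; the scheme's exact form is given in
    Stauffer and Seaman (1990).
    """
    w = np.asarray(weights, dtype=float)
    innov = np.asarray(obs_values, dtype=float) - np.asarray(model_at_obs, dtype=float)
    if w.sum() <= 0.0:
        return 0.0
    return g_alpha * np.sum(w**2 * innov) / np.sum(w)

# Example: two temperature observations influencing one grid point, with a
# nudging coefficient of 4.0e-4 s^-1 (a typical order of magnitude, assumed here).
print(nudging_tendency([271.2, 270.8], [272.0, 271.5], [0.8, 0.3], 4.0e-4))
```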

The horizontal weighting in the observation-nudging equations is defined as a Cressman-type function (Cressman 1959) given by
$$
w_{xy} =
\begin{cases}
\dfrac{R^2 - D^2}{R^2 + D^2}, & 0 \le D \le R, \\
0, & D > R,
\end{cases}
\qquad (1)
$$
where R is the horizontal radius of influence and D is the distance between the grid point and the ith observation. This function prescribes a circular region surrounding the observation, with radially decreasing values of w_xy that range from 1 at the observation site to 0 at D = R. In the standard observation-nudging scheme (Stauffer and Seaman 1994), the data are applied quasi-horizontally at grid points within R = R(p), which varies with the pressure (height) of the observation. The radius of influence is initially defined at the surface, R_s, and it expands linearly with decreasing pressure to a constant value of R′ = 2R_s at and above 500 hPa.
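A direct transcription of (1), together with the vertically expanding radius of influence just described, is sketched below; the 1000-hPa reference surface pressure and the example numbers are assumptions made only for illustration.

```python
import numpy as np

def cressman_weight(dist, radius):
    """Circular Cressman-type weight of Eq. (1): 1 at the obs site, 0 for dist >= radius."""
    dist = np.asarray(dist, dtype=float)
    w = (radius**2 - dist**2) / (radius**2 + dist**2)
    return np.where(dist <= radius, w, 0.0)

def radius_of_influence(p_obs_hpa, r_surface_km, p_surface_hpa=1000.0):
    """R(p): equal to R_s at the surface and growing linearly to 2*R_s at and above 500 hPa.

    The 1000-hPa reference surface pressure is an assumption of this sketch.
    """
    frac = np.clip((p_surface_hpa - p_obs_hpa) / (p_surface_hpa - 500.0), 0.0, 1.0)
    return r_surface_km * (1.0 + frac)

# Example: a 500-hPa observation with R_s = 375 km has R = 750 km;
# a grid point 400 km away then receives a weight of about 0.56.
R = radius_of_influence(500.0, 375.0)
print(R, cressman_weight(400.0, R))
```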
Surface observations can be assimilated in the PSU–NCAR Mesoscale Model on constant-sigma surfaces (the terrain-following vertical levels) with a modified Cressman distance-weighting function that restricts the impact of the observations as a function of the horizontal variation in surface pressure (Stauffer and Seaman 1994). This permits a horizontally continuous application of FDDA in complex terrain. The distance D in the Cressman-type function for observations given by (1) is replaced with a modified distance, D_m, defined as

$$
D_m = D + R_s C_m^{-1} \left| p_{s_o} - p_s \right|, \qquad (2)
$$

where p_s is the surface pressure, R_s is the radius of influence for the surface observations, subscript o refers to the observations, and C_m is a constant that represents a maximum tolerance for the pressure difference over which the data are assumed to be valid. In this study C_m = 75 hPa. As the difference between the surface pressure of the observation site and the surface pressure at the applicable grid point approaches C_m, the second term in (2) approaches R_s and the numerator of (1) rapidly approaches zero. This formulation is not only useful for restricting the assimilation area for observations in complex terrain, but it also forms the basis for SWOBS.

3. Structure-dependent weighting for observations

The most important element in an “optimal” (i.e., statistical) interpolation of observation corrections (i.e., the innovation vector) to a grid is the background error covariance (e.g., Daley 1991). The form of this matrix largely governs the resulting objective analysis. The weighting matrix K_i, which multiplies the innovation vector to produce an analysis increment that minimizes the analysis error variance at time level t_i [using notation following Ide et al. (1997)], is given by

$$
\mathbf{K}_i = \mathbf{P}^f(t_i)\,\mathbf{H}_i^{\mathrm T} \left[ \mathbf{H}_i \mathbf{P}^f(t_i)\,\mathbf{H}_i^{\mathrm T} + \mathbf{R}_i \right]^{-1}, \qquad (3)
$$
where P^f(t_i) is the background (model forecast) error covariance at time t_i, H_i is a linearized observation operator that transforms variables from model space to observation space, and R_i is the observation error covariance (i.e., the covariance of instrumental error and representativeness error). Equation (3) shows that as the magnitudes of the elements of P^f(t_i)H_i^T (scaled by an appropriate scalar constant) increase between an observation site and a given analysis variable grid point, the weight at which the innovation vector is applied at that grid point increases. The shape of the data spreading can be largely controlled by P^f(t_i)H_i^T. The inverse term in (3), [H_i P^f(t_i)H_i^T + R_i]^{-1}, controls the magnitude of the analysis increment by scaling the innovation vector by its covariance. In other words, for a given location, as the scaled background error covariance elements increase relative to the elements of the observation error covariance, the observation increment will be weighted more strongly in the analysis. Conversely, as the observation error increases relative to the background error, our confidence in the observation decreases and the magnitude of the analysis increment (the product of the K_i weighting matrix and the innovation vector) decreases.
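The roles of the two factors in (3) can be illustrated with a small numerical example; this is an idealized two-gridpoint, one-observation case with covariance values chosen arbitrarily for the sketch.

```python
import numpy as np

# Idealized example: two grid points, one observation co-located with grid point 0.
Pf = np.array([[1.0, 0.6],     # background error covariance P^f(t_i); the off-diagonal
               [0.6, 1.0]])    # element controls how far the increment spreads
H  = np.array([[1.0, 0.0]])    # observation operator H_i (selects grid point 0)
R  = np.array([[0.25]])        # observation error covariance R_i

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Eq. (3)

innovation = np.array([2.0])                     # observation minus background at the obs site
increment = K @ innovation

print(K.ravel())          # [0.8, 0.48]: smaller observation error -> larger weight
print(increment)          # [1.6, 0.96]: the spread to point 1 follows the assumed correlation
```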

For this study, the weighting function at a given time will vary only in the horizontal. We will make the common assumption that the background error variance is homogeneous (Daley 1991). By definition, the background error covariance is specified as the product of the background error variance and a background error correlation function. Instead of using an isotropic function to represent the latter, we will assume that the azimuthal dependence of the background error correlation function can be derived from the background gradients as described below. Thus, the background error covariance may be anisotropic based on case-specific, flow-dependent characteristics of the background so that the observation increments will not be applied using the simple standard circular weighting functions. Fischer et al. (1998) studied the dynamics of forecast error covariances in an idealized baroclinic flow using a Kalman filter. They reported that flow-dependent correlations allowed for “a better damping of error growth than the initial isotropic correlations.”

In the design of the structure-dependent weighting for observations, we do not claim that the background field and its gradients are free from errors. In fact in the present study, as in operational applications, the background field contains significant error. (Section 4 describes how the innovation vector has been deliberately degraded in this OSSE case.) Rather, in agreement with recent concepts such as the errors of the day discussed by Kalnay et al. (1997), we hypothesize that a background error covariance structure that ignores case-specific information (e.g., airmass boundaries) actually can lead to an increase in local analysis errors and subsequent prediction errors, even in an optimal analysis framework. This hypothesis was proven in the idealized study of Fischer et al. (1998) that showed significant improvement in an analysis and subsequent forecasts when dynamically consistent covariances were chosen instead of the conventional (i.e., isotropic) analytical formulations. They also indicated that for the mesoscale both the error structures and the sensitivity fields become even more complex than at larger scales, but the variances adapt themselves to the main mesoscale features. They concluded that the major difficulty is to obtain these dynamically reshaped correlations via a computationally tractable method on the mesoscale.

The anisotropic information we seek to add to the data assimilation is contained in gradients of the analysis variables, which are recognized as fronts, jet stream boundaries, strong height gradients, etc. Specifically, it is hypothesized that the background error distributions are more strongly correlated in directions perpendicular to the gradient of the analysis variable than they are parallel to that gradient. For example, the horizontal error structure associated with a front that is moving too slowly is likely to appear elongated and oriented parallel to the axis of the front. The validity of this proposition is supported by the nearly universal practice among manual analysts who quickly learn not to use data located on one side of an atmospheric boundary to correct a variable field (background) on the other side of the boundary just a short distance away. The same concept has been employed successfully in a number of objective analysis techniques discussed in section 1 and was used specifically by Benjamin and Seaman (1985) for reducing analysis errors in upper-level wind fields. Thus, even though the background fields, and hence their gradients, are imperfect, they nevertheless should contain valuable information about the likely anisotropic distribution of the background errors.

SWOBS is demonstrated in this study using observation nudging, which is traditionally performed in the horizontal using a circular radius of influence centered at the observation site to apply the innovation vector, perhaps using a Cressman-type distance weighting as described in section 2. This standard “influence function” is indiscriminate toward temperature and wind gradients that may reflect a change of air mass, such as is typical at frontal boundaries or in coastal zones. To reduce spreading the influence of data (i.e., the innovation vector) into areas where the error characteristics are not representative of local conditions, SWOBS modifies the circular region of influence using the simulated gradient structure for the appropriate variables. Thus, this influence function is based on the structure of the variable field, assuming that it also contains information on the structure of its error covariances.

Specifically, for a given variable α, the modified Cressman-type weight for an observation reflects the difference between the model-generated solution at the observation site and the background value at surrounding grid points. The relationship is expressed according to

$$
D_m = D + R\,(\Delta\alpha_m)^{-1} \left| \hat{\alpha}_i - \hat{\alpha}_o \right|, \qquad (4)
$$

where D is the distance between the grid point i and the observation site o, R is the standard radius of influence, α̂_i is the model-generated value of the prognostic variable α at grid point i, α̂_o is the background value interpolated to the observation site, and Δα_m is the maximum tolerance (i.e., threshold) for the prognostic variable. The threshold value Δα_m adjusts the degree of eccentricity of the region of influence. The value D_m then replaces D in the calculation of the circular Cressman weight in (1). The result is a weighting pattern that reflects the gradients (i.e., structure) of the background field. Thus, for example, at grid points for which D = 0.5R, the model field is not nudged toward the observations when |α̂_i − α̂_o| exceeds one-half the threshold value of Δα_m.

Figure 1 illustrates the impact of (4) applied in (1) at two sites using a typical background temperature field at 850 hPa and a standard radius of influence of 750 km. The standard radius could be defined, for example, on the basis of historic error covariance patterns. The region of influence for observation A, located in a weak thermal gradient, remains nearly circular. However, the weighting structure calculated for observation B, located in a strong gradient associated with a frontal zone, is more elliptical. Notice that the region of influence depicted for B has a serpentine major axis that reflects the particular structure of the isotherms in this case.

SWOBS is applied in this study to the wind and temperature observations above the surface, although it could be adapted to other variables such as moisture. Additionally, while the structure-dependent weighting is univariate in this application, the approach could be applied in a multivariate scheme (e.g., Daley 1991), as well.

4. Experimental design

A thorough evaluation of the SWOBS technique would require using real data and running many cases with a variety of data types. Therefore, for this preliminary study we reduce the uncertainty associated with the limited availability of high-frequency (e.g., 3D hourly) data in real cases by using OSSEs to evaluate SWOBS. The basic OSSE approach is to let a “high quality” numerical model simulation represent the atmosphere in three spatial dimensions plus time. Subsequent model experiments are designed using different numerics or physics and the results are compared to the original, perfectly known pseudoatmosphere (the “truth run”). Since the knowledge of the modeled atmosphere in the truth run is “perfect,” aspects of the altered modeling system in the subsequent experiments, such as the dynamic-initialization technique, can be evaluated in an efficient and unambiguous manner.

OSSEs can be useful tools for investigating new data sources or assimilation strategies. However, a key requirement is that the models used to generate the truth run and the subsequent experiments must be reasonably independent, at least with respect to the characteristics being evaluated. If they are not independent, the experimental solutions may look much like those of the truth run, but only because the models themselves are so similar. This is known as the “identical twin” problem (e.g., Arnold and Dey 1986; Atlas 1997). In addition, OSSEs are more credible when they contain realistic errors and biases as might be expected in real-data cases.

Typically, model independence is established by using different resolutions (e.g., Kuo and Guo 1989), independent physical parameterizations, or even different numerical models (e.g., spectral versus gridpoint models). In this study we seek effective independence in part by using different grid resolutions and model physics. In addition, we impose initial states for the experiments that are very different from the state of the truth run, in much the same way that the initial state of an operational forecast model is only an approximation to the true atmospheric state. Although the identical twin problem may not be fully eliminated here, the impact of using the same dynamical model is minimized through these changes and the deliberate inclusion of error in the OSSE initial fields and observations. In addition, we assume that the model configurations for the truth run and the OSSEs are capable of characterizing typical meteorological flows with sufficient accuracy for our purposes.

The truth run in the OSSE is a 60-h simulation on a 30-km domain (Fig. 2, domain a) with 169 × 181 grid points and 31 sigma layers (8 below 850 hPa). The truth run has a time step of 45 s and is started 24 h prior to the “forecast” period of the subsequent 36-h forecasts, or from t = −24 h to t = +36 h. Physical parameterizations include the Kain–Fritsch deep convection scheme (Kain and Fritsch 1990, 1993), explicit moisture physics (Dudhia 1989), and the Blackadar PBL scheme (Zhang and Anthes 1982). The truth run is not only used for validation of the subsequent experimental runs, but is also the source of the “observations” used in the dynamic initialization of those runs.

The experimental domain for the OSSEs (Fig. 2, domain b) is a subdomain of the 30-km truth-run domain with a 90-km horizontal resolution, 43 × 55 grid points, and 16 sigma layers (5 below 850 hPa). Most of the OSSEs include a 12-h dynamic-initialization period (t = −12 h to t = 0 h), followed by a 36-h forecast (t = 0 h to t = +36 h). The first 12 h of the truth run (t = −24 h to t = −12 h) are discarded to prevent early spinup errors (e.g., Wang and Seaman 1997) from contaminating the OSSEs. Consistent with their coarser resolution, the OSSEs use the Anthes–Kuo convection parameterization (Anthes 1977; Anthes et al. 1987) and no explicit moisture physics. The time step for the OSSEs is 135 s and the Blackadar PBL is used. Thus, the experimental model uses less sophisticated physics for convection and resolved-scale precipitation, as well as lower horizontal and vertical resolutions, and a smaller domain. These characteristics help to establish independence between the truth-model and experimental-model solutions.

The initial conditions for all OSSEs were derived from the truth run. In addition to the simpler physics and lower resolution, we further reduced the likelihood of the identical twin problem by providing the experiments with “degraded analyses” as the first guess (background) before the objective analysis step. That is, mesoscale detail was removed from the truth-run fields so that the 90-km model and the dynamic-initialization procedures of SWOBS must reconstruct the smaller-scale features in order to show skill. To generate the background meteorological fields, information was first extracted from the truth-run solutions at 270-km intervals over the reduced area of the experimental grid shown in Fig. 2 (domain b). The 270-km gridded fields were then bilinearly interpolated to the 90-km domain and filtered with a smoother–desmoother (Shapiro 1970). Nonpredictive fields, such as snow cover, were projected directly to the 90-km domain from the truth run. Finally, an arbitrary phase error (i.e., a 180-km westward shift) was introduced into the background fields prior to performing an objective analysis, completing the intentional degradation of the background used in the innovation vector and in the lateral boundary conditions.
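For concreteness, the degradation procedure can be mimicked roughly as follows. This is a loose sketch only: a generic separable 1-2-1 filter stands in for the Shapiro (1970) smoother–desmoother, and the phase-shift direction refers to array columns rather than true west.

```python
import numpy as np
from scipy.ndimage import zoom

def degrade_background(truth_30km):
    """Build a deliberately degraded background from a 30-km 'truth' field.

    Loosely follows the text: sample at 270-km intervals (every 9th truth
    point), bilinearly interpolate back to 90-km spacing (factor of 3),
    smooth, and impose a 180-km (two 90-km grid lengths) phase shift.
    """
    coarse = truth_30km[::9, ::9]                 # 270-km sampling of the truth run
    background = zoom(coarse, 3.0, order=1)       # bilinear interpolation to 90 km
    k = np.array([0.25, 0.5, 0.25])
    for axis in (0, 1):                           # simple separable smoothing pass
        background = np.apply_along_axis(
            lambda col: np.convolve(col, k, mode="same"), axis, background)
    # 180-km phase shift = 2 grid lengths (np.roll wraps at the boundary; adequate for a sketch)
    return np.roll(background, -2, axis=1)
```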

The initial and boundary conditions for the OSSEs were then calculated using the objective analysis described by Benjamin and Seaman (1985). In this step, the intentionally degraded background field was enhanced using “observations” extracted from the truth-run solutions at 450-km intervals. This data density simulates the typical spacing of radiosonde observations over North America. Next, the observations were intentionally degraded by interpolating from the truth-run sigma surfaces to mandatory and user-defined significant pressure levels. Once the observations were added and the final analysis was interpolated back to the OSSE’s sigma levels, most of the phase shift in the background fields was corrected, much like occurs in operational conditions. By design, however, the remaining errors had spatial distributions and magnitudes roughly similar to those found in the initial conditions of similar-scale operational models.

Figure 3 illustrates the impact of intentionally degrading the background fields by comparing vertical distributions of averaged root-mean-square (rms) errors in the three types of model initial conditions generated for the winter season: final (objectively analyzed) OSSE initial fields without intentional degradation, final OSSE initial conditions with the phase-lag error and additional smoothing in the background fields, and NCEP’s Nested Grid Model (NGM), which has a comparable resolution to the OSSEs. The two sets of OSSE statistics are averaged from sample 0000 and 1200 UTC initializations in January 1994. The NGM statistics are averaged from February 1993 and January 1994 (S. Benjamin 1994, personal communication). Figure 3 shows that the OSSE initial conditions without the intentional degradation had rms errors significantly smaller than those found in the NGM for the winter season (a consequence of the identical twin problem). By imposing the phase-lag error and additional smoothing on the background fields, however, the final rms error distributions for both wind and temperature are generally similar to those of the NGM, even though the phase-lag error was mostly removed by the objective analysis. In addition, correlation coefficients were computed relative to the “truth” state for the degraded and nondegraded fields as a function of pressure. These statistics indicate correlations of 0.89–0.96 for the nondegraded wind field, but only 0.66–0.88 for the degraded wind field, with the stronger correlations occurring at lower pressures. The correlations for the degraded initial temperature state were also lower than for the nondegraded initial conditions, as expected, but both types had much higher correlations (0.982–0.996) throughout the depth of the model atmosphere.

The observations extracted from the truth run for data assimilation are degraded in these experiments by projecting them onto the model’s 16 vertical levels. In the OSSEs, the upper-air pseudo-observations are assimilated during the 12-h dynamic-initialization period at 3-h intervals (450-km resolution). At the surface, pseudodata are assimilated at hourly intervals and at 150-km resolution. The lateral boundary conditions for the OSSEs were generated using degraded (smoothed) 90-km fields with a phase-lag error, as in the first-guess fields.

Table 1 summarizes the nudging characteristics used for each of the six OSSE experiments presented in this paper. The first three are a statically initialized control experiment (CTRL) without FDDA, a dynamic-initialization experiment using circular radii of influence (CIRC), and a dynamic-initialization experiment using SWOBS (SWOBS). The other three are sensitivity experiments. In the first sensitivity experiment (ALPHA), the threshold values [Δαm in (4)] are modified to illustrate the sensitivity to this variable. In the final two experiments (LARGE and SMALL), the radii of influence were increased and decreased, respectively, to show sensitivity to R.

5. Case overview

A cold-season case is chosen to demonstrate the SWOBS dynamic-initialization technique. The strongly forced, high-amplitude case of 13–15 January 1992 featured a rapidly developing deep baroclinic storm. This type of case with strong gradients in the temperature and wind fields associated with a vigorous cold frontal passage is most likely to benefit from a technique such as SWOBS. Upper-level dynamics (a deepening 500-hPa trough over the central United States and a strong jet streak at 250 hPa with wind speed exceeding 75 m s−1) supported rapid intensification of the surface storm (more than 24 hPa in 24 h) while it was still over land. The fast-moving low produced moderate rains, squall lines, and tornadoes in Pennsylvania and Mississippi (Goodge 1992), while blizzard conditions and heavy snow were reported in the Midwest. Because of the importance of deep dynamics to the storm’s development, the discussion focuses on the upper levels.

At the beginning of the study period, 0000 UTC 13 January (hereafter denoted using the convention 13/00), a 1002-hPa surface low was forming in west-central Texas (not shown) ahead of a deep positively tilted 500-hPa trough over the Rocky Mountains (Fig. 4a). A warm, moist maritime air mass was poised over the Gulf of Mexico, marked by a surface warm front just north of the coastline (not shown). The 250-hPa jet streak, associated with the rapid intensification and propagation of the storm, was located in northern Mexico at this time (not shown).

By 14/00, the surface low had advanced to western Tennessee and had begun the rapid-deepening phase (minimum pressure of 995 hPa; not shown). Although the NCEP analysis shows the intensifying 500-hPa short wave associated with the storm had become negatively tilted and was still open at this time, the northerly wind at Little Rock, Arkansas, indicates that it had already started to form a closed center (Fig. 4b). The primary jet streak at 250 hPa had raced northeastward to the Ohio Valley by this time, maintaining its strength (not shown), while a secondary jet streak had formed over southern Texas.

The storm continued to accelerate and deepen rapidly over the next 24 h, reaching the Quebec–Vermont border as a 974-hPa low by 15/00 (not shown). Squall lines and severe weather had swept through the mid-Atlantic region ahead of the storm’s cold front, while heavy lake-effect snows occurred behind the storm in Ohio and Pennsylvania. The 500-hPa analysis at 15/00 (Fig. 4c) shows that the negatively tilted short wave over the Northeast had opened as it approached the deep upper-level low over northern Hudson Bay.

6. Results

a. Truth run

The 60-h truth run simulation begins at 13/00 (t = −24 h), while the experimental forecasts described in section 6b begin at 14/00 (t = 0 h). In the truth run at t = 0 h, a deep 500-hPa trough is located west of the Mississippi Valley, with a newly formed 5391-m closed low over southwestern Arkansas (Fig. 5a). This is consistent with the existence of a small low implied by the observed wind at Little Rock, Arkansas (Fig. 4b). At the surface, a 993-hPa low is centered at the Mississippi–Tennessee border (not shown). The storm at this time is about 60 km farther south and 2 hPa deeper than observed. Recall, however, that the truth run defines the “pseudo-observed” state for the OSSEs.

By 15/00 (t = +24 h), the truth run developed a sharp negatively tilted 500-hPa trough over the mid-Atlantic region, similar to the observed trough (cf. Figs. 5b and 4c). The simulated surface low at that time was located over northern New York with a central pressure of 958 hPa, and it continued to track northeastward into Canada during the final 12 h of the study period (not shown). In this experiment design, direct longwave and shortwave radiation flux divergence within the atmospheric column was ignored, which caused the column to become too warm, especially in the deep-cloud region associated with the developing storm. The net warming contributed to pressure falls that were greater than observed. Although the truth run overdeepened the storm by 16 hPa at 15/00, its strong dynamical support captured the rapid intensification and track of the storm reasonably well. Therefore, the truth run, although imperfect, is a suitable basis for the subsequent OSSEs.

b. SWOBS OSSEs

The evaluation of the SWOBS OSSEs focuses primarily on the dynamic-initialization period (t = −12 h to t = 0 h) and the first 12 h of the model forecast (t = 0 h to t = +12 h), when the impact on skill is anticipated to be greatest. Measures of skill include correlation coefficients for wind speed and temperature, and rms errors for sea level pressure, height, temperature, vector wind difference (Stauffer and Seaman 1990), and mixing ratio. Additionally, precipitation skill in the OSSEs is evaluated at 6-h intervals using the threat score, bias score, and categorical forecast (Anthes et al. 1989). The 90-km OSSEs are verified over the region indicated in Fig. 2 (domain c) against the fields extracted from the truth run. The hourly rms errors are calculated on a point-by-point basis and then averaged over the verification domain at the surface and at 850, 700, 500, and 300 hPa.
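
As a concrete example of one of these measures, the sketch below computes the domain-averaged rms vector wind difference between a forecast and the truth run at a single level and time, restricted to a verification subdomain. It follows the usual definition of the statistic (the square root of the mean squared magnitude of the vector wind error); the grid dimensions, mask, and placeholder data are assumptions, not values from the study.

import numpy as np

def rms_vector_wind_difference(u_f, v_f, u_t, v_t, mask=None):
    """Domain-averaged rms vector wind difference (m/s) at one level and time."""
    sq = (u_f - u_t) ** 2 + (v_f - v_t) ** 2      # squared magnitude of the vector error
    if mask is not None:
        sq = sq[mask]                             # restrict to the verification domain
    return np.sqrt(np.mean(sq))

# Hourly time series over a 25-h window (t = -12 h to t = +12 h) on a 43 x 55 grid
rng = np.random.default_rng(1)
u_t = rng.normal(20.0, 5.0, size=(25, 43, 55))
v_t = rng.normal(0.0, 5.0, size=(25, 43, 55))
u_f = u_t + rng.normal(0.0, 3.0, size=u_t.shape)
v_f = v_t + rng.normal(0.0, 3.0, size=v_t.shape)
mask = np.zeros((43, 55), dtype=bool)
mask[10:30, 15:40] = True                         # hypothetical verification subdomain
series = [rms_vector_wind_difference(u_f[h], v_f[h], u_t[h], v_t[h], mask) for h in range(25)]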

Figures 6a and 6b show the hourly rms errors for the vector winds at 500 and 300 hPa from experiments CTRL (no FDDA), CIRC, and SWOBS for both the dynamic initialization and forecast periods. Hourly verification data are available at upper levels due to the nature of the OSSEs. Notice that errors in the dynamically initialized experiments, CIRC and SWOBS (both with assimilation of 3-hourly soundings and hourly surface observations), actually decrease prior to the start of the forecast because the model generates scales of motion not resolved in the smooth initial fields at t = −12 h. More importantly, the SWOBS technique consistently decreases the errors by 0.3–0.6 m s−1 across the verification domain and through the period compared to the standard circular weighting. This represents an error reduction of about 10%–20% from the total rms error of about 3–4 m s−1. Moreover, experiment SWOBS continues to reduce the rms errors in the predicted wind field by about 0.2–0.4 m s−1 through most of the first 12 h of the forecast period, compared to CIRC, and both dynamically initialized simulations are considerably more accurate than the statically initialized experiment, CTRL (no FDDA). These results are representative of the rms errors for winds across the verification domain at all levels. The trend for SWOBS to show improvement over CIRC (as well as CTRL) is also apparent in the mean wind errors (not shown).

A generally similar trend can be detected in the temperature and moisture fields, although the effect of using SWOBS is smaller for the mass fields than for the wind fields. At 850 hPa (Fig. 6c), there is no significant difference between the CIRC and SWOBS thermal fields. The reduction of rms temperature errors is about 5% (0.02–0.05°C) at 500 hPa (Fig. 6d). Recall that nudging of temperature and moisture is not performed in the PBL, and the radius of influence (R) is smaller at 850 hPa than at 500 hPa. The improvement in the 700-hPa mixing ratio (Fig. 6e) due to SWOBS is also unlikely to be significant. Nevertheless, these figures indicate that SWOBS did not inadvertently degrade the thermal and moisture fields in these experiments.

Another important evaluation of the SWOBS technique involves subjective examination of the spatial distribution of its influence on the solutions. For example, at the end of the dynamic initialization (t = 0 h), the 500-hPa height fields of experiments CIRC and SWOBS (not shown) reflect the large-scale pattern of the “analysis” extracted from the truth run (Fig. 5a). Figure 7 shows the 500-hPa height errors at 14/00 (t = 0 h) for these two experiments, computed relative to the truth-run analysis in Fig. 5a. In Fig. 7a, the largest height errors in experiment CIRC (maximum of +30 m) are concentrated near the short-wave trough from northern Louisiana to western Tennessee. Figure 1 suggests that the SWOBS technique should have its greatest impact in this vicinity. Figure 7b demonstrates that even though height data are not assimilated directly and the domain-averaged rms errors in the mass field are only marginally affected by the SWOBS technique, the maximum height errors at the base of the trough are reduced to +18 m in experiment SWOBS. Figure 7 also shows that minor improvements in skill are gained with SWOBS throughout the simulation domain (e.g., cf. magnitudes of maximum errors in California and Utah) and are not strictly limited to the verification domain. In the development of the upper-level low, none of the three experiments (CTRL, CIRC, or SWOBS) generated the 500-hPa closed low (see Fig. 5). However, comparison of the 5430-m contours clearly indicates that experiment SWOBS exhibits a stronger tendency to close the low than CIRC (not shown). Thus, the SWOBS technique has improved the dynamically initialized height field in the region of greatest importance for storm development, while differences between the two error fields are minimal in other areas of the domain where the gradients are less extreme. At 14/12 (t = +12 h), experiment SWOBS still maintains a small improvement in the 500-hPa height field over CIRC (up to 2 m), particularly in the region of the developing dynamic system (not shown).

The rms error statistics for height generally showed no appreciable difference between experiments SWOBS and CIRC (often less than 0.5 m; not shown) at any vertical level during the dynamic initialization and the early forecast. The sea level pressure evaluation at 14/00 showed a small improvement in the central pressure of the surface low (0.25 hPa) using SWOBS. The rms error statistics showed generally less than 0.05-hPa difference between CIRC and SWOBS through the first 12 h of the simulation (not shown).

Another perspective on the influence of SWOBS for these two experiments is shown in Figs. 8 and 9, which display the vertical distribution of potential temperature and wind errors at 14/00 (t = 0 h) along a cross section taken through the short-wave trough (see Fig. 2). Figure 8 indicates reductions of 0.5°–1.0°C in many of the localized error maxima and minima at many levels through the column. Although improvements do not occur at every location, they are fairly widespread and significant. Similarly, Fig. 9 generally shows reductions of wind errors in many areas by 0.5–1.0 m s−1, although a few zones show some error growth as well. FDDA techniques do not normally produce uniform improvements at all points in the domain at all times. The model’s response to the nudging terms in a given case is complex, the result of direct assimilation effects and mutual adjustments that occur simultaneously in the dynamical terms of the mass and wind equations; this response occurs primarily in the form of inertia–gravity waves. In this case, there appears to be a high degree of dynamic balance in the model solution following the dynamic-initialization period (e.g., no oscillations in Fig. 6). This is expected when using a properly designed nudging dynamic-initialization strategy because the corrections to the model solution are generally small and applied gradually. Despite the minor degradation of skill in parts of the region of interest, the overall effect of the SWOBS technique in this case tends to be an improvement in skill.

Figure 10 shows the hourly evolution of the correlation coefficient (r2) for experiments CIRC and SWOBS compared to the truth run for the dynamic-initialization period and the first 12 h of the forecast. Recall that hourly 3D verification fields are available because of the nature of the OSSEs. Figures 10a and 10b show the 500-hPa wind speed correlation for the full domain (Fig. 2, domain b) and for the verification domain (Fig. 2, domain c), respectively. Figures 10c and 10d are the same, but for 500-hPa temperature. Figure 10 shows a mostly consistent improvement in the correlation for SWOBS relative to CIRC in both fields and across both domains. Although the absolute magnitude of the differences is often small, the relative magnitude of the correlation coefficients is more important; subtle improvements are expected with SWOBS because it is a refinement of the application of FDDA in CIRC. Figure 10 also illustrates that improvements can be seen in both the mass and momentum fields at the same level using SWOBS; that is, neither field is necessarily gaining skill at the expense of degrading skill in the other. Finally, Fig. 10 shows that the improvements with SWOBS over CIRC occur not only in the region of the largest gradients (i.e., the verification domain), but throughout the forecast domain. In addition, the impacts of using SWOBS were clearly apparent through the first 12 h of the forecast. This further substantiates our claim that case-specific, flow-dependent, anisotropic weighting of observations has value in data assimilation.

Thus, the horizontal and vertical error fields show that the SWOBS technique tends to produce improvements in statistical skill by reducing errors throughout the column, primarily in regions of strong gradients associated with the trough and storm intensification. The evaluation also suggests that the model can retain some positive impact up to 12 h into the forecast. This result in the dynamic initialization is consistent with the design hypothesis that background error distributions tend to be correlated with the gradient information in the variable fields. It confirms the importance of developing assimilation methods that are case dependent and feature based.

Table 2 summarizes the impact on 6-hourly precipitation statistics for the 0.025-cm (about 0.01 in.) threshold in experiments CTRL, CIRC, and SWOBS. For this light rain threshold, there is no significant difference in the forecast precipitation between the two dynamically initialized experiments, although both produce generally small improvements in skill compared to experiment CTRL, especially in the first 6 h of the forecast period. This tendency to improve the early precipitation fields is characteristic of many dynamic initializations due to reduction of the spinup problem in the model’s moisture fields (e.g., Wang and Seaman 1997). Overall, the generally good scores produced for all three experiments are attributed to the strong dynamics of this case, which act to organize the precipitation evolution. The decline in the threat score statistics in the later hours of the forecast period can be attributed, in part, to the propagation of the storm outside of the verification domain, as well as to the natural degradation of the forecast skill with time. The results are similar for heavier precipitation amounts (not shown).
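
For reference, the threat and bias scores reported in Table 2 can be computed from the 2 × 2 contingency table of forecast versus observed occurrence at the chosen threshold, as sketched below using the standard definitions [threat = hits/(hits + misses + false alarms); bias = (hits + false alarms)/(hits + misses)]. The precipitation fields here are placeholders, not data from the experiments.

import numpy as np

def precip_scores(forecast, observed, threshold=0.025):
    """Threat and bias scores for 6-h precipitation totals (cm) at a threshold.

    Assumes at least one event is forecast or observed so the denominators are nonzero.
    """
    f = forecast >= threshold
    o = observed >= threshold
    hits = np.sum(f & o)
    false_alarms = np.sum(f & ~o)
    misses = np.sum(~f & o)
    threat = hits / float(hits + misses + false_alarms)
    bias = (hits + false_alarms) / float(hits + misses)
    return threat, bias

# Placeholder 6-h precipitation fields on the verification grid
rng = np.random.default_rng(2)
obs = rng.gamma(0.5, 0.05, size=(43, 55))
fcst = obs * rng.normal(1.0, 0.3, size=obs.shape)
print(precip_scores(fcst, obs, threshold=0.025))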

Last, three sensitivity experiments are performed, each of which represents a variation on experiment SWOBS (see Table 1). In experiment ALPHA, the weighting threshold used to define the tolerance for the gradients of wind and temperature, Δα, is reduced to one-half of its value in experiment SWOBS. The effect of this change is to reduce the area over which the data are used when the local gradients are large. This change produced moderate, but consistent, increases in the rms errors for experiment ALPHA compared to experiment SWOBS (0.00–0.14 m s−1 for 500-hPa wind and 0.00–0.03°C for 500-hPa temperature between t = −12 h and t = +12 h; not shown). Thus, some care must be taken when setting the values of Δα so that the data assimilation is not unintentionally restricted to too small a region.

In the two final sensitivity runs, experiments LARGE and SMALL, the effect of the standard radius of influence (independent of the local gradient) is evaluated. In experiment LARGE, the circular base Rs is increased from 375 to 600 km (so R = 1200 km above 500 hPa); in experiment SMALL, it is decreased to 270 km (R = 540 km above 500 hPa). A comparison of results from experiments LARGE and SWOBS indicates that the large standard Rs consistently produces somewhat greater rms errors in the 500-hPa winds during the dynamic initialization and the first 12 h of the forecast (4%–13%, including nine consecutive hours with a sustained rms error difference of 0.30 m s−1). The rms errors for 500-hPa temperature are also consistently greater in LARGE than in SWOBS (also 4%–13%). However, when the smaller Rs is used in experiment SMALL, rms errors for the 500-hPa winds are decreased only slightly for the same period (1%–3%), and there is no significant trend in the 500-hPa temperature. The difference between wind errors in experiments SMALL and SWOBS is not statistically significant in this case.

7. Conclusions

The purpose of this study has been to investigate the utility of case-specific anisotropic background gradient information to improve the skill of a data assimilation and subsequent forecast. The study is based on the hypothesis that the background error distributions are more strongly correlated in directions perpendicular to the gradient of the variable field than they are parallel to the gradient. Therefore, the influence of the innovation vector must become strongly anisotropic where significant airmass boundaries are encountered. Many data assimilation systems ignore case-specific, flow-dependent information, while relying on error covariance matrices based on a large number of cases. Assimilation schemes using prescribed background error covariance matrices (e.g., OI) are optimal in that they minimize the analysis error in a least squares sense. Nevertheless, they may neglect important time-varying, case-specific, flow-dependent information. It is important to emphasize that standard 3DVAR and 4DVAR methods applied to operational, real-data numerical forecasting do not evolve the background error covariance in space and time during the assimilation period as is possible when using a Kalman filter approach (e.g., Derber and Bouttier 1999; Rabier et al. 2000; Klinker et al. 2000). Specification of time-varying, flow-dependent background error covariances is a challenge for all current operational data assimilation systems.

To demonstrate the potential importance of case-specific, flow-dependent error covariance information, a gradient-dependent feature was added to a Newtonian nudging scheme designed for the PSU–NCAR Mesoscale Model. Structure-dependent weighting for observations, or SWOBS, was used to dynamically initialize the model within an OSSE framework. The OSSE was developed for a case with strong dynamics, large horizontal gradients, and rapid baroclinic development. This type of synoptic scenario is likely to gain the most benefit from a case-adaptive technique such as SWOBS. The SWOBS technique was applied using the wind and thermal fields of the intentionally degraded model background.

As anticipated, the structure-dependent weighting scheme affected the data assimilation specifically in regions where the gradients in wind and temperature were strongest. Near the strong baroclinic short wave in the OSSE, 500-hPa height errors were reduced by nearly 12 m, or about 40%, following the dynamic initialization. The SWOBS influence was spatially focused in the horizontal, but extended through most of the atmospheric column and affected both winds and temperatures. The relative magnitudes of the correlation coefficients for mass and momentum fields showed a consistent improvement with SWOBS across the simulation domain through the dynamic-initialization period and the subsequent forecast. Error reductions following the dynamic initializations resulted in moderately better predictions during the first 12 h of the forecasts, but the effect tended to diminish with time, as expected. In the OSSE case examined here, no significant statistical impact was found in the precipitation fields when structure-dependent weighting was used.

Despite the measures taken to make the OSSEs as realistic as possible, we recognize that OSSEs can yield results that are too optimistic. In this case, both of the experiments with circular and noncircular regions of influence used the same OSSE fields, so neither experiment should have had an advantage. However, the value of structure-dependent weighting techniques for general use in dynamic initialization cannot be assessed adequately with the limited tests performed in this heuristic study. Although these experiments represent only a preliminary demonstration of feasibility, they do indicate that case-specific and flow-dependent information drawn from the background fields can be valuable for improving the meteorological simulation. Structure-dependent weighting techniques may have particular value for data assimilation in situations where there are distinct boundaries or discontinuities in the flow (e.g., coastal zone) and circular influence functions are least appropriate. Finally, we note that since the reduction of error when using SWOBS was greatest through the dynamic-initialization period, such techniques may have merit in research applications where FDDA is used throughout the simulation period (e.g., air quality modeling).

Acknowledgments

This research was supported by the National Science Foundation under Grant ATM-9116176 and by the U.S. Environmental Protection Agency under Contract CR-818974-01. The first author is currently supported by EPA Interagency Agreement DW1393834 with NOAA. This manuscript has been subjected to EPA and NOAA reviews and approved for publication. Mention of trade names or commercial products does not constitute endorsement or recommendation for use. Computer resources were supplied by the Earth System Science Center at The Pennsylvania State University and by the National Center for Atmospheric Research, Boulder, Colorado, which is supported by the National Science Foundation. The authors are grateful to Brian Etherton and Craig Bishop for useful discussions regarding the Kalman filter. The authors also acknowledge the anonymous reviewers whose comments and suggestions served to strengthen this manuscript.

REFERENCES

  • Andrews, P., 1997: The development of an operational variational data analysis scheme at the UKMO. NWP On-Line Scientific Note No. 5, Meteorological Office, Bracknell, Berkshire, United Kingdom. [Available online at http://www.met-office.gov.uk/sec5/NWP/NWP_ScienceNotes/No5/No5.html.].

  • Anthes, R. A., 1977: A cumulus parameterization scheme utilizing a one-dimensional cloud model. Mon. Wea. Rev.,105, 270–286.

  • ——, and T. T. Warner, 1978: Development of hydrodynamic models suitable for air pollution and other mesometeorological studies. Mon. Wea. Rev.,106, 1045–1078.

  • ——, E.-Y. Hsie, and Y.-H. Kuo, 1987: Description of the Penn State/NCAR mesoscale model version 4 (MM4). NCAR Tech. Note NCAR/TN-282+STR, 66 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • ——, Y.-H. Kuo, E.-Y. Hsie, S. Low-Nam, and T. W. Bettge, 1989: Estimation of skill and uncertainty in regional numerical models. Quart. J. Roy. Meteor. Soc.,115, 763–806.

  • Arakawa, A., and V. R. Lamb, 1977: Computational design of the basic dynamical processes of the UCLA general circulation model. Methods in Computational Physics, B. Adler et al., Eds., Vol. 17, Academic Press, 173–265.

  • Arnold, C. P., Jr., and C. H. Dey, 1986: Observing-systems simulation experiments: Past, present, and future. Bull. Amer. Meteor. Soc.,67, 687–695.

  • Atlas, R., 1997: Atmospheric observations and experiments to assess their usefulness in data assimilation. J. Meteor. Soc. Japan,75, 111–130.

  • Benjamin, S. G., and N. L. Seaman, 1985: A simple scheme for objective analyses in curved flow. Mon. Wea. Rev.,113, 1184–1198.

  • ——, K. J. Brundage, and L. L. Morone, 1994: The Rapid Update Cycle. Part I: Analysis/model description. Technical Procedures Bull. 416, NOAA/NWS, 16 pp. [Available from National Weather Service, Office of Meteorology, 1325 East–West Highway, Silver Spring, MD 20910.].

  • ——, J. M. Brown, K. J. Brundage, B. E. Schwartz, T. G. Smirnova, and T. L. Smith, 1998: The operational RUC-2. Preprints, 16th Conf. on Weather Analysis and Forecasting, Phoenix, AZ, Amer. Meteor. Soc., 249–252.

  • Burgers, G., P. J. van Leeuwen, and G. Evensen, 1998: Analysis scheme in the ensemble Kalman filter. Mon. Wea. Rev.,126, 1719–1724.

  • Charney, J., M. Halem, and R. Jastrow, 1969: Use of incomplete historical data to infer the present state of the atmosphere. J. Atmos. Sci.,26, 1160–1163.

  • Cohn, S. E., N. S. Sivakumaran, and R. Todling, 1994: A fixed-lag Kalman smoother for retrospective data assimilation. Mon. Wea. Rev.,122, 2838–2867.

  • Courtier, P., and Coauthors, 1998: The ECMWF implementation of three-dimensional variational assimilation (3D-Var). I: Formulation. Quart. J. Roy. Meteor. Soc.,124, 1783–1807.

  • Cressman, G., 1959: An operational objective analysis system. Mon. Wea. Rev.,87, 367–374.

  • Cullen, M. J. P., 1993: The unified forecast/climate model. Meteor. Mag.,122, 81–94.

  • Daley, R., 1991: Atmospheric Data Analysis. Cambridge University Press, 457 pp.

  • Derber, J., and F. Bouttier, 1999: A reformulation of the background error covariance in the ECMWF global data assimilation system. Tellus,51A, 195–221.

  • Dévényi, D., and T. W. Schlatter, 1994: Statistical properties of three-hour prediction “errors” derived from the Mesoscale Analysis and Prediction System. Mon. Wea. Rev.,122, 1263–1280.

  • ——, and S. G. Benjamin, 1998: Application of a three-dimensional variational analysis in RUC-2. Preprints, 12th Conf. on Numerical Weather Prediction, Phoenix, AZ, Amer. Meteor. Soc., 37–40.

  • Dudhia, J., 1989: Numerical study of convection observed during the winter monsoon experiment using a mesoscale two-dimensional model. J. Atmos. Sci.,46, 3077–3107.

  • Errico, R. M., 1999: Meeting summary: Workshop on assimilation of satellite data. Bull. Amer. Meteor. Soc.,80, 463–471.

  • Evensen, G., 1997: Advanced data assimilation for strongly nonlinear dynamics. Mon. Wea. Rev.,125, 1342–1354.

  • Fast, J. D., 1995: Mesoscale modeling and four-dimensional data assimilation in areas of highly complex terrain. J. Appl. Meteor.,34, 2762–2782.

  • Fischer, C., A. Joly, and F. Lalaurette, 1998: Error growth and Kalman filtering within an idealized baroclinic flow. Tellus,50A, 596–615.

  • Gandin, L. S., 1963: Objective Analysis of Meteorological Fields. Gidrometeorolgicheskoe Izdatel’stvo, Leningrad. [Translated from Russian, 1965, Israel Program for Scientific Translations, 242 pp.].

  • Goodge, G. W., Ed., 1992: Storm data and unusual weather phenomena. Storm Data, Vol. 34, No. 1, 54 pp. [Available from National Climatic Data Center, Federal Building, 37 Battery Park Avenue, Asheville, NC 28801-2733.].

  • Grell, G. A., J. Dudhia, and D. R. Stauffer, 1994: A description of the fifth-generation Penn State/NCAR mesoscale model (MM5). NCAR Tech. Note NCAR/TN-398+STR, 138 pp. [Available from National Center for Atmospheric Research, P.O. Box 3000, Boulder, CO 80307.].

  • Guo, Y.-R., Y.-H. Kuo, J. Dudhia, and D. Parsons, 2000: Four-dimensional variational data assimilation of heterogeneous mesoscale observations for a strong convective case. Mon. Wea. Rev.,128, 619–643.

  • Harms, D. E., S. Raman, and R. V. Madala, 1992: An examination of four-dimensional data-assimilation techniques for numerical weather prediction. Bull. Amer. Meteor. Soc.,73, 425–440.

  • Hoke, J. E., and R. A. Anthes, 1976: The initialization of numerical models by a dynamic-initialization technique. Mon. Wea. Rev.,104, 1551–1556.

  • Houtekamer, P. L., and H. L. Mitchell, 1998: Data assimilation using an ensemble Kalman filter technique. Mon. Wea. Rev.,126, 796–811.

  • Ide, K., P. Courtier, M. Ghil, and A. C. Lorenc, 1997: Unified notation for data assimilation: Operational, sequential and variational. J. Meteor. Soc. Japan,75, 181–189.

  • Kain, J. S., and J. M. Fritsch, 1990: A one-dimensional entraining/detraining plume model and its application in convective parameterization. J. Atmos. Sci.,47, 2784–2802.

  • ——, and ——, 1993: Convective parameterization for mesoscale models: The Kain–Fritsch scheme. The Representation of Cumulus Convection in Numerical Models, Meteor. Monogr., No. 46, Amer. Meteor. Soc., 165–170.

  • Kalnay, E., and Coauthors, 1997: Data assimilation in the ocean and in the atmosphere: What should be next? J. Meteor. Soc. Japan,75, 489–496.

  • Klinker, E., F. Rabier, G. Kelly, and J.-F. Mahfouf, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. III: Experimental results and diagnostics with operational configuration. Quart. J. Roy. Meteor. Soc.,126, 1191–1215.

  • Kuo, Y.-H., and Y.-R. Guo, 1989: Dynamic initialization using observations from a hypothetical network of profilers. Mon. Wea. Rev.,117, 1975–1998.

  • Lanzinger, A., and R. Steinacker, 1990: A fine mesh analysis scheme designed for mountainous terrain. Meteor. Atmos. Phys.,43, 213–219.

  • LeDimet, F.-X., and O. Talagrand, 1986: Variational algorithms for analysis and assimilation of meteorological observations. Tellus,38A, 97–110.

  • LeMarshall, J. F., L. M. Leslie, and C. Spinoso, 1997: The generation and assimilation of cloud-drift winds in numerical weather prediction. J. Meteor. Soc. Japan,75, 383–393.

  • Leslie, L. M., J. F. LeMarshall, R. P. Morison, C. Spinoso, R. J. Purser, N. Pescod, and R. Seecamp, 1998: Improved hurricane track forecasting from the continuous assimilation of high quality satellite wind data. Mon. Wea. Rev.,126, 1248–1257.

  • Lewis, J. M., and J. C. Derber, 1985: The use of adjoint equations to solve a variational adjustment problem with advective constraints. Tellus,37A, 309–322.

  • Lönnberg, P., and A. Hollingsworth, 1986: The statistical structure of short-range forecast errors as determined from radiosonde data. II: The covariance of height and wind errors. Tellus,38A, 137–161.

  • Lorenc, A. C., 1986: Analysis methods for numerical weather prediction. Quart. J. Roy. Meteor. Soc.,112, 1177–1194.

  • ——, R. S. Bell, and B. MacPherson, 1991: The Meteorological Office analysis correction data assimilation scheme. Quart. J. Roy. Meteor. Soc.,117, 59–89.

  • McPherson, R. D., 1975: Progress, problems, and prospects in meteorological data assimilation. Bull. Amer. Meteor. Soc.,56, 1154–1166.

  • Michelson, S. A., and N. L. Seaman, 2000: Assimilation of NEXRAD-VAD winds in summertime meteorological simulations over the northeastern United States. J. Appl. Meteor.,39, 367–383.

  • Miller, P. A., and S. G. Benjamin, 1992: A system for the hourly assimilation of surface observations in mountainous and flat terrain. Mon. Wea. Rev.,120, 2342–2359.

  • Parrish, D. F., and J. C. Derber, 1992: The National Meteorological Center’s spectral statistical interpolation analysis system. Mon. Wea. Rev.,120, 1747–1763.

  • ——, ——, R. J. Purser, W.-S. Wu, and Z.-X. Pu, 1997: The NCEP global analysis system: Recent improvements and future plans. J. Meteor. Soc. Japan,75, 359–365.

  • Parsons, D. B., and J. Dudhia, 1997: Observing system simulation experiments and objective analysis tests in support of the goals of the Atmospheric Radiation Measurement Program. Mon. Wea. Rev.,125, 2353–2381.

  • Pu, Z.-X., E. Kalnay, J. C. Derber, and J. G. Sela, 1997a: Using forecast sensitivity patterns to improve future forecast skill. Quart. J. Roy. Meteor. Soc.,123, 1035–1053.

  • ——, ——, D. Parrish, W. Wu, and Z. Toth, 1997b: The use of bred vectors in the NCEP global 3D variational analysis system. Wea. Forecasting,12, 689–695.

  • Rabier, F., H. Järvinen, E. Klinker, J.-F. Mahfouf, and A. Simmons, 2000: The ECMWF operational implementation of four-dimensional variational assimilation. I: Experimental results with simplified physics. Quart. J. Roy. Meteor. Soc.,126, 1143–1170.

  • Rogers, E., D. G. Deaven, and G. J. DiMego, 1995: The regional analysis system for the operational “early” Eta Model: Original 80-km configuration and recent changes. Wea. Forecasting,10, 810–825.

  • Sasaki, Y., 1970: Some basic formulations in numerical variational analysis. Mon. Wea. Rev.,98, 875–883.

  • Schraff, C. H., 1997: Mesoscale data assimilation and prediction of low stratus in the alpine region. Meteor. Atmos. Phys.,64, 21–50.

  • Seaman, N. L., 2000: Meteorological modeling for air-quality assessments. Atmos. Environ.,34, 2231–2259.

  • ——, D. R. Stauffer, and A. M. Lario-Gibbs, 1995: A multiscale four-dimensional data assimilation system applied in the San Joaquin Valley during SARMAP. Part I: Modeling design and basic performance characteristics. J. Appl. Meteor.,34, 1739–1761.

  • Shapiro, R., 1970: Smoothing, filtering and boundary effects. Rev. Geophys. Space Phys.,8, 359–387.

  • Stauffer, D. R., and N. L. Seaman, 1990: Use of four-dimensional data assimilation in a limited-area mesoscale model. Part I: Experiments with synoptic-scale data. Mon. Wea. Rev.,118, 1250–1277.

  • ——, and J.-W. Bao, 1993: Optimal determination of nudging coefficients using the adjoint technique. Tellus,45A, 358–369.

  • ——, and N. L. Seaman, 1994: Multiscale four-dimensional data assimilation. J. Appl. Meteor.,33, 416–434.

  • ——, ——, and F. S. Binkowski, 1991: Use of four-dimensional data assimilation in a limited-area mesoscale model. Part II: Effects of data assimilation within the planetary boundary layer. Mon. Wea. Rev.,119, 734–754.

  • ——, ——, T. T. Warner, and A. M. Lario, 1993: Application of an atmospheric simulation model to diagnose air-pollution transport in the Grand Canyon region of Arizona. Chem. Eng. Comm.,121, 9–26.

  • ——, ——, G. K. Hunter, S. M. Leidner, A. Lario-Gibbs, and S. Tanrikulu, 2000: A field-coherence technique for meteorological field-program design for air quality studies. Part I: description and interpolation. J. Appl. Meteor.,39, 297–316.

  • Tanrikulu, S., D. R. Stauffer, N. L. Seaman, and A. J. Ranzieri, 2000: A field-coherence technique for meteorological field-program design for air quality studies. Part II: evaluation in the San Joaquin Valley. J. Appl. Meteor.,39, 317–334.

  • Todling, R., S. E. Cohn, and N. S. Sivakumaran, 1998: Suboptimal schemes for retrospective data assimilation based on the fixed-lag Kalman filter. Mon. Wea. Rev.,126, 2274–2286.

  • Wang, W., and N. L. Seaman, 1997: A comparison study of convective parameterization schemes in a mesoscale model. Mon. Wea. Rev.,125, 252–278.

  • Zhang, D.-L., and R. A. Anthes, 1982: A high-resolution model of the planetary boundary layer—Sensitivity tests and comparisons with SESAME-79 data. J. Appl. Meteor.,21, 1594–1609.

  • Zou, X., I. M. Navon, and F. X. LeDimet, 1992: An optimal nudging data assimilation scheme using parameter estimation. Quart. J. Roy. Meteor. Soc.,118, 1163–1186.

  • Zupanski, D., 1997: A general weak constraint applicable to operational 4DVAR data assimilation systems. Mon. Wea. Rev.,125, 2274–2292.

Fig. 1. SWOBS region of influence at two selected observation sites, A (Great Falls, MT) and B (Little Rock, AR). The background field is an analysis of 850-hPa temperature, valid 0000 UTC 14 Jan 1992. Contour interval is 2°C. The radius of influence R is 750 km, and the threshold ΔT is 20°C.

Fig. 2. Experimental domains for the OSSE. (a) The 30-km, 169 × 181 domain for the truth run. (b) The 90-km, 43 × 55 domain for the OSSE experiments. (c) The verification domain. Vertical cross sections are analyzed along A–B.

Fig. 3. Vertical comparison of averaged rms errors in the OSSE initial conditions to the NGM winter averages. (top) Rms error for vector wind analyses and (bottom) rms error for temperature analyses. The NGM winter averages are plotted as solid boxes, the OSSE initial conditions without intentional degradation are plotted as open circles, and the OSSE initial conditions with a 180-km phase lag and additional smoothing are plotted as open triangles.

Fig. 4. The NCEP 500-hPa analysis of height and temperature. Height is in dam, temperature and dewpoint depression in °C, height tendency in dam (12 h)−1, and wind speed in kt. Contour interval for height contours is 60 m. Contour interval for isotherms is 5°C. Valid (a) 0000 UTC 13 Jan, (b) 0000 UTC 14 Jan, and (c) 0000 UTC 15 Jan 1992.

Fig. 5. The 500-hPa height “analysis” from the OSSE truth run. Contour interval is 60 m. Valid (a) 0000 UTC 14 Jan and (b) 0000 UTC 15 Jan 1992.

Fig. 6. Hourly rms errors contrasting expts CTRL, CIRC, and SWOBS. The lines connecting open circles represent the CTRL forecast beginning at t = 0 h. The diamonds represent expt CIRC, and the cross hatches represent expt SWOBS. The dynamic initialization begins at t = −12 h and the subsequent forecast begins at t = 0 h. The (a) 500-hPa vector wind, (b) 300-hPa vector wind, (c) 850-hPa temperature, (d) 500-hPa temperature, and (e) 700-hPa mixing ratio.

Fig. 7. The 500-hPa height errors following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 10 m. Shading represents error in excess of 20 m. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 8. South–north cross section of potential temperature errors near the upper-low center following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 0.5 K. Shading represents error with magnitude in excess of 2 K. Vertical coordinate is pressure in hPa. Refer to Fig. 2 for location of cross section A–B. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 9. South–north cross section of wind speed errors near the upper-low center following dynamic initialization, valid 0000 UTC 14 Jan 1992 (t = 0 h). Contour interval is 2 m s−1. Shading represents error with magnitude in excess of 4 m s−1. Vertical coordinate is pressure in hPa. Refer to Fig. 2 for location of cross section A–B. (a) Experiment CIRC. (b) Experiment SWOBS.

Fig. 10. Hourly correlation coefficient (r2) statistics contrasting expts CIRC and SWOBS verified against the truth run. The open circles represent expt CIRC, and the diamonds represent expt SWOBS. The dynamic initialization begins at t = −12 h and the subsequent forecast begins at t = 0 h. The (a) 500-hPa wind speed on full domain, (b) 500-hPa wind speed on verification domain, (c) 500-hPa temperature on full domain, and (d) 500-hPa temperature on verification domain.

Table 1. Summary of nudging characteristics in SWOBS experiments.

Table 2. Comparison of precipitation statistics at the 0.025-cm (0.01 in.) precipitation threshold for CTRL, CIRC, and SWOBS.