Search Results

You are looking at 1 - 10 of 12 items for

  • Author or Editor: Rolf H. Langland
Rolf H. Langland

Abstract

An adjoint-based method is used to calculate the impact of observation data on a measure of short-range forecast error in the Navy Operational Global Atmospheric Prediction System (NOGAPS) during November and December 2003. The evaluated observations include all regular satellite and in situ data assimilated in the Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System (NAVDAS) at 1800 UTC, and also targeted dropsonde profiles provided by the North Atlantic Observing-System Research and Predictability Experiment (THORPEX) Regional Campaign (NA-TReC) field program. Commercial aircraft observations account for 46% of the total forecast error reduction by observations in the NA-TReC domain, which includes the North Atlantic and adjacent regions of North America and Europe. Targeted dropsonde data have high impact per observation, but the impact of all dropsonde data is less than 2% of the total during the 2-month study period. Eight of 12 targeted dropsonde cases reduce forecast error. The percentages of total impact for other observations assimilated at 1800 UTC in the NA-TReC domain are as follows: Advanced Microwave Sounding Unit-A (AMSU-A) radiances (16%), satellite winds (14%), land surface data (9%), radiosondes (8%), and ship-surface data (5%). If observations over the entire global domain are evaluated, the largest impact of data provided at 1800 UTC during November and December 2003 comes from AMSU-A radiance data (48% of total).
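As a rough, hedged illustration of how an adjoint-based observation-impact measure of this kind is commonly formulated (a generic Langland–Baker-style sketch; the symbols C, K, H, and the quadratic error norm below are illustrative and not taken from the NOGAPS/NAVDAS code):

```latex
% Generic adjoint-based observation-impact sketch (not the paper's exact derivation)
e(x_f) = (x_f - x_t)^{\mathsf T} C \, (x_f - x_t)   % forecast error in a chosen norm
\delta e \;\approx\; \big\langle\, y - H(x_b),\;
    \mathbf{K}^{\mathsf T}\big[ \nabla_{x_a} e_{f|a} + \nabla_{x_b} e_{f|b} \big] \big\rangle
% The two gradients come from adjoint-model integrations of the forecasts started
% from the analysis and the background; the inner product is then partitioned by
% observation type, channel, or location to give per-observation impacts.
```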

Full access
Rolf H. Langland and Ronald M. Errico

Abstract

No abstract available

Full access
Rolf H. Langland and Chi-Sann Liou

Abstract

An E–ε parameterization of subgrid-scale vertical turbulent mixing has been installed in NORAPS (Navy Operational Regional Atmospheric Prediction System). The 1.5-order parameterization uses full prognostic equations for turbulence kinetic energy E and dissipation ε with no mixing-length l assumption. A stable numerical method has been developed to integrate the two prognostic equations; this method has time and memory requirements similar to those of a first-order K-theory turbulence parameterization and avoids numerical instabilities reported with E–l (Mellor–Yamada level 2.5) schemes. The E–ε parameterization produces a more active mixed layer compared to a first-order K-theory scheme. Improvements are noted in forecasts of mixed-layer depth and near-surface wind speed, with reduction or elimination of spurious noise in the predicted fields of temperature and wind that was related to deficiencies of the first-order K-theory parameterization. In a numerical simulation of the ERICA (Experiment on Rapidly Intensifying Cyclones over the Atlantic) IOP 5A storm, the E–ε parameterization provides an improved forecast of cyclone central pressure. The better cyclone forecast results primarily from more accurate prediction of wind speed near the surface and in the upper troposphere, where first-order K theory may produce unrealistic vertical mixing of momentum and temperature.
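For readers unfamiliar with the closure, a minimal single-column sketch of a standard E–ε scheme is given below; the constants are common textbook values and the explicit update is for illustration only, not the stable integration method developed for NORAPS:

```python
import numpy as np

# Minimal sketch of a standard E-epsilon closure (textbook constants;
# NOT the NORAPS implementation described above).
C_MU, C_1, C_2 = 0.09, 1.44, 1.92

def eddy_viscosity(E, eps, eps_min=1e-10):
    """Eddy viscosity K_m = C_mu * E**2 / eps, with a floor on dissipation."""
    return C_MU * E**2 / np.maximum(eps, eps_min)

def step_tke_eps(E, eps, shear_prod, buoy_prod, dt):
    """One explicit step of the two prognostic equations
       dE/dt   = P + B - eps
       deps/dt = (eps/E) * (C_1*(P + B) - C_2*eps)
       (vertical diffusion omitted; the buoyancy coefficient is lumped with C_1)."""
    dE = shear_prod + buoy_prod - eps
    deps = eps / np.maximum(E, 1e-10) * (C_1 * (shear_prod + buoy_prod) - C_2 * eps)
    return np.maximum(E + dt * dE, 1e-8), np.maximum(eps + dt * deps, 1e-10)
```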

Full access
Dacian N. Daescu and Rolf H. Langland

Abstract

The forecast sensitivity to observations (FSO) is embedded in a new optimization framework for improving observation performance in atmospheric data assimilation. Key ingredients are introduced as follows: the innovation-weight parameterization of the analysis equation, the FSO-based evaluation of the forecast error gradient with respect to the parameters, a line search approach to optimization, and an efficient mechanism for step-length specification. This methodology is tested in preliminary numerical experiments with the Naval Research Laboratory Atmospheric Variational Data Assimilation System-Accelerated Representer (NAVDAS-AR) and the U.S. Navy’s Global Environmental Model (NAVGEM) at T425L60 resolution. The experimental setup relies on a verification state produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) to estimate the analysis and short-range forecast errors. Parameter tuning is implemented in a training stage valid for 1–14 April 2018 and aimed at improving the use of assimilated observations in reducing the initial-condition errors. Assessment is carried out for 15 April–31 May 2018 to investigate the performance of the weighted assimilation system in reducing the errors in analyses and 24-h model forecasts. On average, compared with the control run and verified against the ECMWF analyses, the weighted approach provided a 17.4% reduction in analysis errors and a 3.1% reduction in 24-h forecast errors, measured in a dry total energy norm. Observation impacts are calculated to assess the use of various observation types in reducing the analysis and forecast errors. In particular, assimilation of satellite wind data is significantly improved through the innovation-weighting procedure.
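A schematic sketch of the innovation-weighting idea follows; the function names, the use of per-observation FSO impacts as a gradient proxy, and the simple projected step are illustrative assumptions, not the NAVDAS-AR/NAVGEM implementation:

```python
import numpy as np

# Schematic innovation-weighting sketch: scale each innovation by a weight,
# use FSO-based per-observation impacts as an approximate gradient of the
# forecast error with respect to those weights, and take a line-search step.
def weighted_analysis(x_b, K, d, w):
    """Weighted analysis x_a = x_b + K @ (w * d), with innovations d = y - H(x_b)."""
    return x_b + K @ (w * d)

def update_weights(w, fso_impact, step):
    """Negative impact = beneficial observation; moving the weights downhill
       along the impact 'gradient' increases the weight of beneficial data
       and decreases it for detrimental data, kept non-negative."""
    return np.maximum(w - step * fso_impact, 0.0)
```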

Significance Statement

A new methodology is introduced to improve the information content of observations in numerical weather prediction (NWP). The computational procedure relies on observation sensitivity tools developed at all major NWP centers and therefore appeals to a large audience for implementation and testing in practical applications. Our approach retains all available observations and provides judicious, optimization-based guidance to identify system deficiencies and improve the weighting assigned to various observing-system components. The practical ability to implement this methodology is demonstrated in a computational environment that features all elements necessary for NWP applications. Preliminary results show that proper specification of the innovation weights can significantly improve the observation performance in reducing both the analysis errors and short-range forecast errors.

Restricted access
Rolf H. Langland, Christopher Velden, Patricia M. Pauley, and Howard Berger

Abstract

The impacts of special Geostationary Operational Environmental Satellite (GOES) rapid-scan (RS) wind observations on numerical model 24–120-h track forecasts of Hurricane Katrina are examined in a series of data assimilation and forecast experiments. The RS wind vectors are derived from geostationary satellites by tracking cloud motions through successive 5-min images. In these experiments, RS wind observations are added over the area 15°–60°N, 60°–110°W, and they supplement the observations used in operational forecasts. The inclusion of RS wind observations reduces errors in numerical forecasts of the Katrina landfall position at 1200 UTC 29 August 2005 by an average of 12% compared to control cases that include “targeted” dropsonde observations in the Katrina environment. The largest average improvements are made to the 84- to 120-h Katrina track forecasts, rather than to the short-range track forecasts. These results suggest that RS wind observations can potentially be used in future cases to improve track forecasts of tropical cyclones.
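As a toy illustration of the cloud-tracking principle behind such atmospheric motion vectors (this is not the operational GOES RS wind algorithm; the brute-force matcher below is purely illustrative):

```python
import numpy as np

# Toy cloud-motion-vector sketch: find the pixel shift of a cloud feature
# between two images separated by dt by maximizing a simple correlation
# score over a search window.  Illustrative only.
def feature_displacement(patch_t0, search_t1):
    """Return the (row, col) offset of patch_t0 inside the larger search_t1 window."""
    best_score, best_shift = -np.inf, (0, 0)
    ph, pw = patch_t0.shape
    sh, sw = search_t1.shape
    for i in range(sh - ph + 1):
        for j in range(sw - pw + 1):
            score = np.sum(patch_t0 * search_t1[i:i + ph, j:j + pw])
            if score > best_score:
                best_score, best_shift = score, (i, j)
    return best_shift  # convert to a wind estimate via pixel size, navigation, and dt
```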

Full access
Rolf H. Langland, Paul M. Tag, and Robert W. Fett

Abstract

Satellite imagery from 18 April 1978 suggests the presence of a semicircular zone of calm or near-calm seas in Monterey Bay, California. It is hypothesized that sea-breeze circulations account for the calm zone in the bay, although a lack of in situ surface and upper-air observations prevents direct verification of this theory. A three-dimensional numerical model of the marine atmospheric boundary layer is used to simulate the development of the low-level wind field on the day in question, under sea-breeze conditions. The model produces a zone of wind speeds under 1.0 m s⁻¹ over the center of the bay, near the time of the satellite image. These model results suggest that a sea-breeze circulation may have accounted for the zone of very light winds and calm sea.

Full access
Rolf H. Langland, Melvyn A. Shapiro, and Ronald Gelaro

Abstract

Short- and medium-range (24–96-h) forecasts of the January 2000 U.S. east coast cyclone and associated snowstorm are examined using the U.S. Navy global forecast model and adjoint system. Attention is given to errors on the synoptic scale, including forecast position and central pressure of the cyclone at the verification time of 1200 UTC 25 January 2000. There is a substantial loss of predictive skill in the 72- and 96-h forecasts, while the 24- and 48-h forecasts capture the synoptic-scale features of the cyclone development with moderate errors. Sensitivity information from the adjoint model suggests that the initial conditions for the 72-h forecast starting at 1200 UTC 22 January 2000 contained relatively small, but critical, errors in upper-air wind and temperature over a large upstream area, including part of the eastern Pacific and “well observed” areas of western and central North America. The rapid growth of these initial errors in a highly unstable flow regime (large singular-vector growth factors) is the most likely cause of the large errors that developed in operational short- and medium-range forecasts of the snowstorm. The large extent of the upstream sensitive area in this case would appear to make “targeting” a small set of new observations an impractical method to improve forecast skill. A diagnostic correction (derived from adjoint sensitivity information) of a part of the initial condition error in the 72-h forecast reduces the forecast error norm by 75% and reduces an 1860-km error in cyclone position to 105 km. This demonstrates that the model is capable of making a skillful forecast starting from an initial state that is plausible and not far from the original initial conditions. It is also shown that forecast errors in this case propagate at speeds that are greater than those of the synoptic-scale trough and ridge features of the cyclone.
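In generic terms, an adjoint-derived initial-condition correction of the kind described above can be sketched as follows (the norm C, the scaling α, and the linearization are assumptions for illustration, not the paper's exact procedure):

```latex
% Sketch of an adjoint-based initial-condition correction (illustrative)
J(x_0) = \tfrac{1}{2}\,\big(x_f - x_v\big)^{\mathsf T} C \big(x_f - x_v\big),
\qquad
\nabla_{x_0} J = \mathbf{M}^{\mathsf T} C \big(x_f - x_v\big),
\qquad
x_0' = x_0 - \alpha\, \nabla_{x_0} J
% where M^T is the adjoint of the forecast model and x_v the verifying analysis.
```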

Full access
Ronald Gelaro, Rolf H. Langland, Simon Pellerin, and Ricardo Todling

Abstract

An experiment is being conducted to directly compare the impact of all assimilated observations on short-range forecast errors in different forecast systems using an adjoint-based technique. The technique allows detailed comparison of observation impacts in terms of data type, location, satellite sounding channel, or other relevant attributes. This paper describes results for a “baseline” set of observations assimilated by three forecast systems for the month of January 2007. Despite differences in the assimilation algorithms and forecast models, the impacts of the major observation types are similar in each forecast system in a global sense. However, regional details and other aspects of the results can differ substantially. Large forecast error reductions are provided by satellite radiances, geostationary satellite winds, radiosondes, and commercial aircraft. Other observation types provide smaller impacts individually, but their combined impact is significant. Only a small majority of the total number of observations assimilated actually improves the forecast, and most of the improvement comes from a large number of observations that have relatively small individual impacts. Accounting for this behavior may be especially important when considering strategies for deploying adaptive (or “targeted”) components of the observing system.
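A small, hypothetical example of how per-observation impact values from such an adjoint-based technique might be aggregated by data type and summarized (the record layout and field names are assumptions):

```python
from collections import defaultdict

# Hypothetical aggregation of per-observation impact values; a negative
# impact means the observation reduced the short-range forecast error.
def summarize_impacts(obs_records):
    """obs_records: iterable of dicts with keys 'type' and 'impact'."""
    totals, beneficial, count = defaultdict(float), 0, 0
    for rec in obs_records:
        totals[rec["type"]] += rec["impact"]
        beneficial += rec["impact"] < 0.0
        count += 1
    frac_beneficial = beneficial / count if count else float("nan")
    return dict(totals), frac_beneficial
```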

Full access
Sergey Frolov, Douglas R. Allen, Craig H. Bishop, Rolf Langland, Karl W. Hoppel, and David D. Kuhl

Abstract

The local ensemble tangent linear model (LETLM) provides an alternative method for creating the tangent linear model (TLM) and adjoint of a nonlinear model that promises to be easier to maintain and more computationally scalable than earlier methods. In this paper, we compare the ability of the LETLM to predict the difference between two nonlinear trajectories of the Navy’s global weather prediction model at low resolution (2.5° at the equator) with that of the TLM currently used in the Navy’s four-dimensional variational (4DVar) data assimilation scheme. When compared to the pair of nonlinear trajectories, the traditional TLM and the LETLM have improved skill relative to persistence everywhere in the atmosphere, except for temperature in the planetary boundary layer. In addition, the LETLM was, on average, more accurate than the traditional TLM (error reductions of about 20% in the troposphere and 10% overall). Sensitivity studies showed that the LETLM was most sensitive to the number of ensemble members, with the performance gradually improving with increased ensemble size up to the maximum size attempted (400). Inclusion of physics in the LETLM ensemble leads to a significantly improved representation of the boundary layer winds (error reductions of up to 50%), in addition to improved winds and temperature in the free troposphere and in the upper stratosphere/lower mesosphere. The computational cost of the LETLM was dominated by the cost of ensemble propagation. However, the LETLM can be precomputed before the 4DVar data assimilation algorithm is executed, leading to a significant computational advantage.
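The core regression idea behind an LETLM can be sketched in a few lines; this is a simplification of the published algorithm, with localization, stencil selection, and physics handling omitted:

```python
import numpy as np

# Simplified LETLM sketch: for one local influence volume, fit a linear
# operator L mapping ensemble perturbations at time t to time t+dt.
def fit_local_operator(dX_t, dX_tp1):
    """dX_t, dX_tp1: (n_local_state, n_ens) perturbation matrices.
       Returns L such that dX_tp1 ~= L @ dX_t in the least-squares sense."""
    return dX_tp1 @ np.linalg.pinv(dX_t)

def apply_local_operator(L, dx):
    """Propagate a local perturbation one step with the fitted operator."""
    return L @ dx
```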

Full access
Douglas R. Allen, Sergey Frolov, Rolf Langland, Craig H. Bishop, Karl W. Hoppel, David D. Kuhl, and Max Yaremchuk

Abstract

An ensemble-based linearized forecast model has been developed for data assimilation applications for numerical weather prediction. Previous studies applied this local ensemble tangent linear model (LETLM) to various models, from simple one-dimensional models to a low-resolution (~2.5°) version of the Navy Global Environmental Model (NAVGEM) atmospheric forecast model. This paper applies the LETLM to NAVGEM at higher resolution (~1°), which required overcoming challenges including 1) balancing the computational stencil size with the ensemble size, and 2) propagating fast-moving gravity modes in the upper atmosphere. The first challenge is addressed by introducing a modified local influence volume, introducing computations on a thin grid, and using smaller time steps. The second challenge is addressed by applying nonlinear normal mode initialization, which damps spurious fast-moving modes and improves the LETLM errors above ~100 hPa. Compared to a semi-Lagrangian tangent linear model (TLM), the LETLM has superior skill in the lower troposphere (below 700 hPa), which is attributed to better representation of moist physics in the LETLM. The LETLM skill slightly lags in the upper troposphere and stratosphere (700–2 hPa), which is attributed to nonlocal aspects of the TLM including spectral operators converting from winds to vorticity and divergence. Several ways forward are suggested, including integrating the LETLM in a hybrid 4D variational solver for a realistic atmosphere, combining a physics LETLM with a conventional TLM for the dynamics, and separating the LETLM into a sequence of local and nonlocal operators.

Free access