Search Results

You are looking at 1 - 5 of 5 items for

  • Author or Editor: P. Lönnberg
Kamal Puri and P. Lönnberg

Abstract

The sensitivity of the analysis of a well-observed tropical cyclone to high-resolution structure functions, and to modification of the analysis scheme to accept data in the vicinity of the cyclone, is studied. It is shown that these changes in the analysis system lead to a much improved location of the cyclone. The high-resolution structure functions also give an analysed vertical structure that is closer to the observed data and so is more realistic. However, the strong vertical wind shears are still not satisfactorily analysed.
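
The abstract does not spell out the analysis equations, so the following is only a rough one-dimensional illustration of what a structure function does in a statistical analysis of the optimal-interpolation family: shortening the correlation length scale concentrates the increment from a single cyclone-area observation near the reported position instead of spreading it over a broad region. The function names, scales, and numbers are invented for the example.

```python
# Illustrative only: 1-D optimal interpolation of a single surface-pressure
# observation; not the analysis scheme of the paper. Assumed names and numbers.
import numpy as np

def oi_increment(x_grid, x_obs, innovation, sigma_b, sigma_o, length_scale):
    """Analysis increment from one observation under a Gaussian structure function."""
    rho = np.exp(-0.5 * ((x_grid - x_obs) / length_scale) ** 2)  # background-error correlation
    weight = sigma_b**2 * rho / (sigma_b**2 + sigma_o**2)        # single-observation OI weight
    return weight * innovation

x = np.linspace(-1000.0, 1000.0, 201)    # distance from the reported cyclone centre (km)
innovation = -20.0                       # observation minus first guess at the centre (hPa)

broad = oi_increment(x, 0.0, innovation, sigma_b=5.0, sigma_o=2.0, length_scale=500.0)
sharp = oi_increment(x, 0.0, innovation, sigma_b=5.0, sigma_o=2.0, length_scale=150.0)

i = np.argmin(np.abs(x - 300.0))         # a point 300 km from the centre
print("increment 300 km away: broad %.1f hPa, sharp %.1f hPa" % (broad[i], sharp[i]))
# The shorter, higher-resolution structure function keeps the deepening close to
# the observed cyclone position rather than smearing it over ~1000 km.
```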

A. Hollingsworth and P. Lönnberg

Abstract

No abstract available.

G. Kelly, E. Andersson, A. Hollingsworth, P. Lönnberg, J. Pailleux, and Z. Zhang

Abstract

Earlier work identified serious errors and biases in the operational temperature and moisture satellite retrievals produced by statistical methods. We show that similar errors and biases are found in the physical retrievals produced operationally since September 1988. We report experiments on quality-control algorithms to deal with the errors in the satellite data. The quality-control changes resulting from this work were implemented in the European Centre for Medium-Range Weather Forecasts (ECMWF) system in January 1989, and their performance since implementation has been satisfactory.
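
The abstract does not describe the specific algorithms. As a sketch of one common quality-control ingredient, a first-guess (background) check rejects a retrieval whose departure from the model background exceeds a multiple of the expected departure spread; the function, thresholds, and numbers below are assumptions for illustration, not the ECMWF implementation.

```python
# Hypothetical first-guess check, for illustration only (not the operational code).
import numpy as np

def first_guess_check(retrievals, background, sigma_b, sigma_o, threshold=3.0):
    """Accept data with |y - xb| <= threshold * sqrt(sigma_b**2 + sigma_o**2)."""
    tolerance = threshold * np.sqrt(sigma_b**2 + sigma_o**2)
    return np.abs(retrievals - background) <= tolerance

# Example: layer-mean temperature retrievals (K) against collocated background values.
y  = np.array([251.2, 249.8, 262.5, 250.4])
xb = np.array([250.9, 250.1, 251.0, 250.7])
print(first_guess_check(y, xb, sigma_b=1.0, sigma_o=1.5))   # the 262.5 K value is rejected
```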

E. Andersson, A. Hollingsworth, G. Kelly, P. Lönnberg, J. Pailleux, and Z. Zhang

Abstract

We report an observing system experiment on satellite sounding data during a 15.5-day period in January–February 1987, using the operational European Centre for Medium-Range Weather Forecasts (ECMWF) system as it was in late July 1988. The forecast results show a negative impact of the satellite sounding data (SATEM) in the Northern Hemisphere and a strong positive impact in the Southern Hemisphere. The model and analysis developments implemented between July 1987 and July 1988 led to forecast improvements whether or not SATEM data were used. Improvements were larger in the NoSATEM context. Consequently, the neutral Northern Hemisphere impact of SATEM data with the 1987 system became a negative impact with the 1988 system. Thus, recent changes in the analysis–forecast system have made the system more sensitive to data, and therefore more vulnerable to bad data. We show that the statistical retrievals have serious errors and biases. The biases are airmass-dependent and so have strong regional variations.
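
The abstract does not say how the airmass dependence was diagnosed. One simple way to expose such a bias, sketched below with entirely synthetic numbers, is to bin retrieval-minus-background departures by an airmass proxy such as the background layer-mean temperature and inspect the mean departure in each bin.

```python
# Synthetic illustration of an airmass-dependent bias diagnostic; data are invented.
import numpy as np

rng = np.random.default_rng(0)
t_background = rng.uniform(230.0, 290.0, size=2000)                       # airmass proxy (K)
departure = 0.05 * (t_background - 260.0) + rng.normal(0.0, 1.0, 2000)    # built-in bias

edges = np.arange(230.0, 300.0, 10.0)
which = np.digitize(t_background, edges) - 1
for k in range(len(edges) - 1):
    sel = which == k
    print("%3.0f-%3.0f K: mean departure %+5.2f K (n=%d)"
          % (edges[k], edges[k + 1], departure[sel].mean(), sel.sum()))
```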

A. Hollingsworth, D. B. Shaw, P. Lönnberg, L. Illari, K. Arpe, and A. J. Simmons

Abstract

The purpose of this paper is to demonstrate the ability of a modern data assimilation system to provide long-term diagnostic facilities to monitor the performance of the observational network. Operational data assimilation systems use short-range forecasts to provide the background, or first-guess, field for the analysis. We make a detailed study of the apparent or perceived error of these forecasts when they are verified against radiosondes. On the assumption that the observational error of the radiosondes is horizontally uncorrelated, the perceived forecast error can be partitioned into prediction error, which is horizontally correlated, and observation error, which is not. The calculations show that in areas where there is adequate radiosonde coverage, the 6-hour prediction error is comparable with the observation error.
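
The partition just described can be made concrete with a small numerical sketch (a partition of innovation statistics of this kind is often referred to in the data assimilation literature as the Hollingsworth–Lönnberg method). Because the observation errors are assumed horizontally uncorrelated, departures at two different stations covary only through the prediction error, while the at-station variance contains both parts; extrapolating the binned station-pair covariances to zero separation therefore splits the two. Everything below, including the exponential correlation model and the synthetic departures, is an illustrative assumption rather than the paper's calculation.

```python
# Illustrative separation of prediction and observation error from synthetic
# forecast-minus-radiosonde departures; numbers and the correlation model are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_times, n_stations = 500, 40
x = rng.uniform(0.0, 3000.0, n_stations)                 # station positions (km), 1-D for brevity
dist = np.abs(x[:, None] - x[None, :])
sig_p, sig_o, L = 1.5, 1.0, 500.0                        # "true" values used to build the data
P = sig_p**2 * np.exp(-dist / L)                         # horizontally correlated prediction error
dep = rng.multivariate_normal(np.zeros(n_stations), P, n_times) \
      + rng.normal(0.0, sig_o, (n_times, n_stations))    # plus uncorrelated observation error

anom = dep - dep.mean(axis=0)
C = anom.T @ anom / (n_times - 1)                        # station-pair covariance of departures
iu, ju = np.triu_indices(n_stations, k=1)
edges = np.arange(0.0, 1200.1, 300.0)
idx = np.digitize(dist[iu, ju], edges) - 1
binned = np.array([C[iu, ju][idx == k].mean() for k in range(len(edges) - 1)])

centres = 0.5 * (edges[:-1] + edges[1:])
slope, intercept = np.polyfit(centres, np.log(binned), 1)  # exponential model, log-linear fit
pred_var = np.exp(intercept)                               # extrapolated to zero separation
obs_var = np.diag(C).mean() - pred_var                     # the horizontally uncorrelated remainder
print("prediction-error variance ~ %.2f (true %.2f)" % (pred_var, sig_p**2))
print("observation-error variance ~ %.2f (true %.2f)" % (obs_var, sig_o**2))
```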

This statement is discussed from a number of viewpoints. We demonstrate in the Northern Hemisphere midlatitudes, for example, that the forecasts account for most of the evolution of the atmospheric state from one analysis to the next, so that the analysis algorithm needs to make only a small correction to an accurate first-guess field; the situation is rather different in the Southern Hemisphere. If the doubling time for small errors is two days, then analysis error will amplify by less than 10% in 6 hours.
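
Making the closing arithmetic explicit, with the doubling time $T_d = 48$ h and the assimilation interval $\Delta t = 6$ h,

\[
\frac{e(t+\Delta t)}{e(t)} \;=\; 2^{\Delta t / T_d} \;=\; 2^{6/48} \;=\; 2^{1/8} \;\approx\; 1.09,
\]

a growth of about 9%, consistent with the stated bound of 10%.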

This being the case, the statistics of the forecast/observation differences have a simple structure. Large variations of the statistics from station to station, or large biases, are indicative of problems in the data or in the assimilation system. Case studies demonstrate the ability of simple statistical tools to identify systematically erroneous radiosonde wind data in data-sparse as well as data-rich areas, errors that would have been difficult to detect in any other way. The statistical tools are equally effective in diagnosing the performance of the assimilation system.
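
As an illustration of the kind of station-by-station summary meant here (not the operational ECMWF monitoring software), the sketch below tabulates the mean (bias) and standard deviation of forecast-minus-radiosonde wind departures per station and flags stations whose statistics stand out; the station labels, thresholds, and data are invented.

```python
# Hypothetical per-station monitoring summary; layout, thresholds, and data are invented.
import numpy as np

def station_summary(departures, station_ids, bias_limit=2.0, spread_limit=6.0):
    """departures: (n_times, n_stations) array of 6-h forecast minus radiosonde wind (m/s)."""
    bias = np.nanmean(departures, axis=0)
    spread = np.nanstd(departures, axis=0)
    for sid, b, s in zip(station_ids, bias, spread):
        flag = "SUSPECT" if (abs(b) > bias_limit or s > spread_limit) else "ok"
        print("%-6s  bias %+5.2f m/s  std %5.2f m/s  %s" % (sid, b, s, flag))

# Synthetic example in which the third station has a systematic wind bias.
rng = np.random.default_rng(2)
dep = rng.normal(0.0, 3.0, size=(200, 3))
dep[:, 2] += 4.0
station_summary(dep, ["A", "B", "C"])
```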

The results suggest that it is possible to provide regular feedback on the quality of observations of winds and heights to operators of radiosonde networks and other observational systems. This capability has become available over the last decade through improvements in the techniques of numerical weather analysis and prediction.
