Abstract

The role of observations in reducing 24-h forecast errors is evaluated using the adjoint-based forecast sensitivity to observations (FSO) method developed within the Met Office global numerical weather prediction (NWP) system. The impacts of various subsets of observations are compared, with emphasis on space-based observations, particularly those from instruments on board the European Organisation for the Exploitation of Meteorological Satellites Meteorological Operational-A (MetOp-A) platform. Satellite data are found to account for 64% of the short-range global forecast error reduction, with the remaining 36% coming from the assimilation of surface-based observation types. MetOp-A data are measured as having the largest impact of any individual satellite platform (about 25% of the total impact on global forecast error reduction). Their large impact, compared to that of NOAA satellites, is mainly due to MetOp's additional sensors [Infrared Atmospheric Sounding Interferometer (IASI), Global Navigation Satellite System (GNSS) Receiver for Atmospheric Sounding (GRAS), and the Advanced Scatterometer (ASCAT)]. Microwave and hyperspectral infrared sounding techniques are found to give the largest total impacts. However, the GPS radio occultation technique is measured as having the largest mean impact per profile of observations among satellite types. This study demonstrates how the FSO technique can be used to assess the impact of individual satellite data types in NWP. The calculated impacts can be used to guide improvements in the use of currently available data and to contribute to discussions on the evolution of future observing systems.

1. Introduction

The contribution of satellite data to the accuracy of global NWP now exceeds that of surface-based observations. This has been achieved mainly through better usage of satellite data within the data assimilation process (Bouttier and Kelly 2001; Kelly and Thépaut 2007; Gelaro et al. 2010). The Met Office is continually working to expand the range of satellite data assimilated and to improve the impact of currently assimilated data types (e.g., Pavelin et al. 2008; Eyre et al. 2008; Hilton et al. 2009).

The impacts of newly available satellite data or improved assimilation methods are traditionally tested in OSEs (see the appendix for a glossary of acronyms and terminology used in this paper). These experiments measure the impact of an observing system or assimilation change by comparison of forecasts from the modified system with those from a “control” system that does not contain the change. The control usually emulates the current operational NWP system. In this way, the contribution of each development can be assessed one by one. However, using such methods, it is expensive to assess the relative contributions of each assimilated satellite data type to forecast accuracy at any given time; this would involve an exhaustive set of data-denial OSEs in which each satellite data type is excluded in turn. As a consequence, the relative impact of satellite data types within the Met Office global NWP system has not recently been evaluated systematically in this way. Nevertheless, such evaluations are useful for checking the impact of each satellite data type as the system evolves. Impacts may have been beneficial when the data types were first introduced, but they could have deteriorated over time. In addition, such evaluations can make an important contribution to discussions of the evolution and design of future observation systems. At the present time, it is particularly important to understand the impact of data from the MetOp-A satellite in order to guide preparations for the next generation of European polar-orbiting satellites.

To evaluate the relative impact of each observation type, the adjoint-based FSO method developed for the Met Office global NWP system has been used (Lorenc and Marriott 2013). The system estimates the impact on forecast error for each piece of observational information assimilated. All impacts are calculated simultaneously and so the method is efficient. Impacts can be easily aggregated, making the method extremely useful for evaluating the impact of satellite data, which consists of many subtypes. The impact of each of these subtypes cannot be regularly assessed in an affordable manner through the use of OSEs alone. The FSO method, however, is limited to evaluating observation impacts on forecasts typically no longer than 24 hours due to the necessary approximation of the full forecast model by a simplified linear version. OSEs still play an important role in evaluating impacts on longer forecasts.

The purpose of this paper is to document an evaluation of the relative impact of satellite data in the Met Office global NWP system. A brief introduction to the adjoint-based method, a summary of satellite data usage at the Met Office, and a description of the experimental design are given in section 2; results are presented in section 3 and discussed in section 4; and section 5 gives a brief summary of our conclusions.

2. Method

a. The FSO method

This study uses an adjoint-based FSO method similar to that originally developed in Langland and Baker (2004). Our method is subtly different in ways that will be explained in this section. Full details of the Met Office system are documented in Lorenc and Marriott (2013).

We use the Met Office FSO system to measure the impact of observations on global 24-h forecast error. The adjoint method requires that forecast error be represented by a single scalar value. We choose to do this by using a global total energy norm:

 
$e_t = \boldsymbol{\varepsilon}_t^{\mathrm{T}}\,\mathbf{C}\,\boldsymbol{\varepsilon}_t$        (1)

where $\boldsymbol{\varepsilon}_t$ represents the “error” in a simplified global forecast state (as given by the difference from a verifying analysis, which is assumed to be independent of that used to initialize the forecast in question); superscript T denotes the transpose operator; and $\mathbf{C}$ is a diagonal inner-product matrix of energy weights with nonzero elements corresponding to values of horizontal wind, temperature, pressure, and humidity. In this study, weights corresponding to grid points above 150 hPa have been set to zero. The forecast impact is then the change in this total energy as a result of assimilating a batch of observations (usually negative, corresponding to a reduction in forecast error). This is given by

 
$\delta e_t = (\boldsymbol{\varepsilon}_t^{a})^{\mathrm{T}}\,\mathbf{C}\,\boldsymbol{\varepsilon}_t^{a} - (\boldsymbol{\varepsilon}_t^{b})^{\mathrm{T}}\,\mathbf{C}\,\boldsymbol{\varepsilon}_t^{b}$        (2)

where $\boldsymbol{\varepsilon}_t^{a}$ is the error in a simplified forecast state initialized from an analysis and $\boldsymbol{\varepsilon}_t^{b}$ is that initialized from the background state for that analysis. (Both forecast states are valid at time t and are verified against the same analysis.) As (2) is the difference of two squares, it can be written as

 
$\delta e_t = (\boldsymbol{\varepsilon}_t^{a} - \boldsymbol{\varepsilon}_t^{b})^{\mathrm{T}}\left[\mathbf{C}\,(\boldsymbol{\varepsilon}_t^{a} + \boldsymbol{\varepsilon}_t^{b})\right]$        (3)

where the change in error, $\delta\mathbf{x}_t = \boldsymbol{\varepsilon}_t^{a} - \boldsymbol{\varepsilon}_t^{b}$, is also equivalent to the change in forecast state due to the assimilation of observations. The second term, $\mathbf{C}\,(\boldsymbol{\varepsilon}_t^{a} + \boldsymbol{\varepsilon}_t^{b})$, is our forecast sensitivity vector, which can be thought of as the finite gradient in an exact linear expression for the impact of $\delta\mathbf{x}_t$ on $\delta e_t$ (ignoring the fact that it is, itself, dependent on $\delta\mathbf{x}_t$). Note that the forecast sensitivity vector is equivalent to the average of the gradients of (1) at the points of the forecast from analysis and the forecast from background. In Langland and Baker (2004), linearization errors are taken account of by a similar averaging of gradients, but at the analysis time rather than the forecast time. By averaging at the forecast time, we take account of the quadratic nature of (1), which is the dominant nonlinearity of the FSO problem (Gelaro et al. 2007), and we are left with a single forecast sensitivity vector to which we need only apply the adjoint forecast model once, as shown in the following.
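
The algebra behind (1)–(3) can be checked with a short numerical sketch. The Python snippet below (all arrays and weights are invented for illustration; it is not Met Office code) builds a diagonal energy-weight matrix, forms two toy forecast-error vectors, and confirms that the difference-of-squares form (3) reproduces the direct difference of energy norms in (2).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simplified state" dimension and diagonal energy weights (stand-in for C);
# zero weights mimic masking grid points above 150 hPa.
n = 12
c_diag = rng.uniform(0.5, 2.0, size=n)
c_diag[-3:] = 0.0                          # e.g. levels excluded from the norm

# Toy forecast errors: forecast-from-analysis and forecast-from-background,
# both expressed as differences from the same verifying analysis.
e_a = rng.normal(size=n)                   # error of forecast from analysis
e_b = e_a + rng.normal(scale=0.3, size=n)  # error of forecast from background

def energy(e):
    """Total-energy-style norm, Eq. (1): e^T C e with diagonal C."""
    return e @ (c_diag * e)

# Eq. (2): impact as the difference of two quadratic forms.
de_direct = energy(e_a) - energy(e_b)

# Eq. (3): the same impact written as (e_a - e_b)^T [C (e_a + e_b)];
# the bracketed term is the forecast sensitivity vector, i.e. the average
# of the gradients of Eq. (1) at the two forecasts.
sensitivity = c_diag * (e_a + e_b)
de_factored = (e_a - e_b) @ sensitivity

assert np.isclose(de_direct, de_factored)
print(de_direct, de_factored)
```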

Our goal is not to express the impact in terms of the change in forecast state $\delta\mathbf{x}_t$, but to express it in terms of observation innovations (the difference of observations from the model background estimate). We note that the change in forecast state $\delta\mathbf{x}_t$ is due to the assimilation of observations and can be approximated by the expression

$\delta\mathbf{x}_t \approx \mathbf{M}\,\mathbf{S}\,\tilde{\mathbf{K}}\,[\mathbf{y} - H(\mathbf{x}^{b})] = \mathbf{M}\,\mathbf{S}\,\tilde{\mathbf{K}}\,\mathbf{d}$        (4)

where $\mathbf{x}^{b}$ is the background model state; $\mathbf{d} = \mathbf{y} - H(\mathbf{x}^{b})$ is the vector of observation innovations, with $\mathbf{y}$ the observations and $H$ the observation operator; $\tilde{\mathbf{K}}$ is a linearized version of the extended Kalman smoother $\mathbf{K}$, which is applied implicitly during minimization of our incremental 4DVAR scheme; $\mathbf{S}$ is the simplification operator, with generalized inverse $\mathbf{S}^{-\mathrm{I}}$; and $\mathbf{M}$ is a linearized version of our full forecast model used to carry perturbations forward from time 0 to time $t$. Nonlinearities in $\mathbf{K}$ are weak, making $\tilde{\mathbf{K}}$ a good approximation. Nonlinearities in $\mathbf{M}$, however, are larger and can introduce significant linearization error (Lorenc and Marriott 2013). Other approximations made in (4) are that $\mathbf{M}$ operates on simplified model states and uses simplified physics schemes.

Substituting (4) into (3), we get

 
$\delta e_t \approx \mathbf{d}^{\mathrm{T}}\,\tilde{\mathbf{K}}^{\mathrm{T}}\,\mathbf{S}^{\mathrm{T}}\,\mathbf{M}^{\mathrm{T}}\,\mathbf{C}\,(\boldsymbol{\varepsilon}_t^{a} + \boldsymbol{\varepsilon}_t^{b}) = \mathbf{d}^{\mathrm{T}}\,\tilde{\mathbf{s}}$        (5)

$\tilde{\mathbf{s}} = \tilde{\mathbf{K}}^{\mathrm{T}}\,\mathbf{S}^{\mathrm{T}}\,\mathbf{M}^{\mathrm{T}}\,\mathbf{C}\,(\boldsymbol{\varepsilon}_t^{a} + \boldsymbol{\varepsilon}_t^{b})$        (6)

where (6) is our vector of “finite observation sensitivities” and $\mathbf{M}^{\mathrm{T}}$ and $\tilde{\mathbf{K}}^{\mathrm{T}}$ are the adjoint forecast model and adjoint data assimilation scheme, respectively. The adjoint forecast model is available as a standard component of our variational data assimilation system. To mitigate against the linearization problems mentioned with reference to (4), $\mathbf{M}^{\mathrm{T}}$ is linearized about the forecast trajectory, which is the average of those initialized from background and analysis states (Lorenc and Marriott 2013). While $\mathbf{M}^{\mathrm{T}}$ is a line-by-line adjoint of the perturbation forecast model, $\tilde{\mathbf{K}}^{\mathrm{T}}$ is applied by utilizing a modified version of the Met Office 4DVAR scheme, making use of existing software to minimize a modified cost function. Therefore, $\tilde{\mathbf{K}}^{\mathrm{T}}$ is only the adjoint of $\tilde{\mathbf{K}}$ when full convergence is reached. Our 4DVAR method allows the observation operators, and hence the Kalman gain $\mathbf{K}$, to be weakly nonlinear. The adjoint is only defined for a linear operator. We choose to linearize about the final analysis (Lorenc and Marriott 2013).

Equation (6) is a vector that contains a sensitivity corresponding to each observation in $\mathbf{d}$. An estimate of the contribution to the total impact of the kth observation is given by

$\delta e_t^{k} = d_k\,\tilde{s}_k$        (7)

Note though that, as previously mentioned, the vector of sensitivities $\tilde{\mathbf{s}}$ contains dependencies on $\mathbf{d}$. The application of $\tilde{\mathbf{K}}$ to observation innovations in (4) and the fact that (4) is present in the forecast sensitivity vector mean that (7) contains cross products with many elements of $\mathbf{d}$ (i.e., the impact of observation k cannot be uniquely untangled from the total impact). This is a consequence of using (1) at the analysis point, which is nonlinear in $\mathbf{d}$, in (3) to obtain an exact expression for the total impact. Gelaro et al. (2007) show that contributions to partial impacts from cross products with observation innovations outside the set in question are dominated by linear contributions from within the set itself, at least for fairly large sets of observations. We therefore use (7) to calculate the impacts for subsets of observations, making the assumption that the influence of observations external to that set is small and that all subsets are affected in a similar way such that relative impacts are not significantly affected.
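
As an illustration of (4)–(7), the sketch below sets up a fully linear toy assimilation–forecast problem. All operators, dimensions, and covariances are hypothetical stand-ins for $\tilde{\mathbf{K}}$, $\mathbf{M}$, and $\mathbf{C}$ (the simplification operator $\mathbf{S}$ is omitted), so this is a minimal sketch of the principle rather than the Met Office implementation. In this linear setting the per-observation impacts of (7) sum exactly to the total impact of (3); in the real system, where the gain and forecast model are weakly nonlinear, the correspondence is only approximate, as discussed above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes (hypothetical): n state variables, p observations.
n, p = 8, 5

# Linear pieces standing in for the operators in Eq. (4):
H = rng.normal(size=(p, n))                    # observation operator
B = np.eye(n) * 0.5                            # background-error covariance
R = np.eye(p) * 0.2                            # observation-error covariance
M = np.eye(n) + 0.1 * rng.normal(size=(n, n))  # linear "forecast model", 0 -> t
C = np.diag(rng.uniform(0.5, 2.0, size=n))     # energy weights, Eq. (1)

# Explicit Kalman gain (stand-in for the gain applied implicitly in 4DVAR).
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)

# Background, truth, observations, and innovations d = y - H x_b.
x_t = rng.normal(size=n)                       # "truth"
x_b = x_t + rng.normal(scale=0.7, size=n)
y   = H @ x_t + rng.normal(scale=0.4, size=p)
d   = y - H @ x_b

# Analysis and the two forecasts; both verified against the same reference.
x_a   = x_b + K @ d
f_ver = M @ x_t                                # proxy for the verifying analysis
e_b   = M @ x_b - f_ver                        # error of forecast from background
e_a   = M @ x_a - f_ver                        # error of forecast from analysis

# Total impact, Eq. (3), and observation sensitivities, Eqs. (5)-(6):
total_impact = (e_a - e_b) @ C @ (e_a + e_b)
s = K.T @ M.T @ C @ (e_a + e_b)                # adjoint model, then adjoint gain

# Per-observation impacts, Eq. (7); in this fully linear toy they sum
# exactly to the total impact.
per_obs = d * s
assert np.isclose(per_obs.sum(), total_impact)
print(per_obs, per_obs.sum(), total_impact)
```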

b. Satellite data usage

Satellite instruments do not, in general, observe the NWP analysis variables directly; satellite observations are compared with analysis variables via “observation operators” within a data assimilation system. Nevertheless, each satellite data type is affected by, and provides information on, a limited number of analysis variables. The satellite observation types used in this study, and the NWP variables about which these observations contain most information, are listed in Table 1.

Table 1. Satellite observation types used in this study and affected NWP variables. See the appendix for explanations of acronyms.

Note that several other satellite data types are also used to initialize other NWP variables—sea surface temperature, sea ice, snow cover, and soil moisture—but not as part of the 4DVAR process. Consequently, the impact of observations important for their analysis will not be measured by the FSO method.

c. Experimental design

The global NWP system used for this experiment is the same as that used operationally at the Met Office from 16 March 2011. The analyses are produced using 4DVAR with a 6-h cycle. The horizontal resolution of the nonlinear forecast model is N320 (about 40 km), with 70 vertical levels up to 60 km. The linear model in 4DVAR includes moist processes and has the same vertical resolution as the nonlinear model with a horizontal resolution of N216 (about 60 km).

Observation impacts give an estimate of the change in 24-h forecast error due to the assimilation of observations. Forecast error is approximated as the difference between the forecast and its own analysis, as described in section 2a, and quantified using a global moist energy norm extending from the surface to 150 hPa. It is important to remember the nature of this error measure when interpreting observation impacts. For example, our assumption that verifying analyses are independent of the analyses being studied is not strictly valid and the relationship between observation biases and subsequent analysis and forecast biases should be considered. Also, while we hope that the 24-h global energy norm is a useful measure of impact, it may not include all aspects of the forecasts or forecast lengths in which one is interested: the norm used in this study focuses on the troposphere and may underestimate the impact of observations to which there is sensitivity at higher altitudes.

The total forecast impact is approximated by a global sum of the observation impact, which will be called the total observation impact hereafter. The term observation impact will refer to partial sums of the observation impact over various subsets, unless otherwise specified. Observation impact can be used to assess the relative importance of each observation type within the context of this experiment. However, because observation impact depends on the data-accumulation period, it should not be compared directly with that of experiments looking at different periods. It is more appropriate to compare the mean impact per observation.

Observation impacts were produced for the period 1800 UTC 22 August–1200 UTC 29 September 2010 at 6-h intervals. These times are nominal analysis times, with observations being used within ±3 h of the analysis time. Results from this dataset were also presented in Lorenc and Marriott (2013).

The impact of different subsets of observations assimilated in the experiments has been evaluated in several different ways: by subtype, by platform, by technique, by MetOp sensor, and by satellite type (i.e., operational or research), as explained below and as described in detail in Table 2:

  • subtype—the impact of observations from the major categories of space-based and surface-based observations;

  • platform—the impact of data from each satellite platform;

  • technique—the impact of data from each satellite observing technique: MWSs, IRSs, SCAT, GPSRO, imager/AMV and MWI;

  • MetOp sensor—the impact of data from each sensor on board MetOp-A (i.e., IASI, AMSU-A, ASCAT, MHS, HIRS, and GRAS); and

  • operational/research—the impact of data from operational and research satellites, which include EOS (Aqua and Terra), ERS-2, Coriolis, COSMIC, and GRACE. All other satellites are considered as operational.

Table 2. Detailed observations for each subset compared.

It should be noted that the sonde subtype in Table 2 includes the impact of wind profilers and that the surface land subtype includes “BOGUS” data. This does not affect the qualitative interpretation of figures as wind profiler and bogus impacts are almost negligible in this study.

It should also be noted that the SSMIS channels used in this study (channels 2–11, 9–16, and 21–23) give these data the characteristics of an SSM/I-like microwave imager and an AMSU-A/MHS-like microwave sounder. There is also additional temperature sounding capability in the upper stratosphere and mesosphere, but the impact of these channels will not be measured by our FSO technique's tropospheric energy norm. Because of the relatively low impact of SSMIS, and for simplicity, it has been categorized as MWI for the purposes of this study.

In this study, an “observation” is defined as the total observational information from a single sensor at a single horizontal location (e.g., for MWS or IRS techniques) and refers to all the channels assimilated; for the GPSRO technique, observation refers to all parts of the occultation assimilated.
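
To illustrate how the per-observation impacts of (7) are turned into the summary statistics reported in section 3, the sketch below aggregates a handful of invented impact records by category and reports each category's total impact, its share of the grand total, and its mean impact per observation. The records, field names, and numbers are purely illustrative and do not correspond to the Met Office observation database.

```python
from collections import defaultdict

# Hypothetical per-observation impact records (energy-norm units);
# negative values denote forecast-error reduction.
impacts = [
    {"platform": "MetOp-A",    "technique": "IRS",        "impact": -3.1e-5},
    {"platform": "MetOp-A",    "technique": "MWS",        "impact": -2.0e-5},
    {"platform": "NOAA-18",    "technique": "MWS",        "impact": -1.7e-5},
    {"platform": "COSMIC",     "technique": "GPSRO",      "impact": -2.6e-5},
    {"platform": "Meteosat-9", "technique": "imager/AMV", "impact": -0.4e-5},
]

def aggregate(records, key):
    """Sum impacts and count observations for each value of `key`."""
    totals, counts = defaultdict(float), defaultdict(int)
    for rec in records:
        totals[rec[key]] += rec["impact"]
        counts[rec[key]] += 1
    return totals, counts

totals, counts = aggregate(impacts, "technique")
grand_total = sum(totals.values())

# Most negative (largest error reduction) first.
for cat, tot in sorted(totals.items(), key=lambda kv: kv[1]):
    share = 100.0 * tot / grand_total      # fraction of total impact (%)
    mean_per_obs = tot / counts[cat]       # more comparable across periods
    print(f"{cat:12s} impact={tot:+.2e}  share={share:5.1f}%  "
          f"mean/obs={mean_per_obs:+.2e}")
```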

3. Results

a. Observation impact by subtype

Figure 1 shows the observation impact of the observation subsets described by the subtype column in Table 2. Of all the observation categories, MetOp has the largest impact on global forecast error reduction: its contribution to reducing the short-range forecast error is about 25% of the total observation impact of all assimilated observations (Fig. 1). The observation impact of satellite data dominates that of surface-based observations; about 64% of the short-range forecast-error reduction comes from satellite observations and the other 36% from surface-based observations. The impacts of surface-based observation subtypes are led by sonde (15%), followed by aircraft (10%), land surface (7%), and sea surface (4%). The impact of satellite observations comes mainly from low-Earth-orbiting (LEO) satellites, including MetOp and NOAA; LEO satellites contribute about 56% of the total observation impact on short-range NWP forecast error, while geosynchronous (GEO) satellites contribute only about 6%.

Fig. 1. Comparison of the observation impact of major categories of observations, as specified by the subtype column of Table 2. The fraction of the total observation impact is expressed as a percentage.

b. Observation impact by platform

The observation impact of each satellite platform is evaluated and the results are shown in Fig. 2. MetOp-A is measured as having the largest impact of any satellite platform (39% of the observation impact of all satellite platforms), followed by NOAA and Aqua. The observation impact of NOAA-16 is very small because only AVHRR AMVs are assimilated during the period of this experiment. Meteosat shows the strongest impact among GEO satellites, its impact here being mainly due to AMVs. (The SEVIRI CLR impact from Meteosat is an order of magnitude lower than its AMV impact.) Note that, to calculate the mean AMV impacts per observation in this study, U and V components have not been counted as separate observations but rather each UV pair has been considered to constitute a single observation.

Fig. 2. As in Fig. 1, but for the platform categories described in Table 2. The fraction of the total satellite observation impact is expressed as a percentage.

The observation impact and mean impact per observation of AMVs are shown in Fig. 3. This statistic shows us whether the large observation impact of Meteosat (compared with some other GEOs) is due to the large impact of individual observations or to large numbers of observations. The observation impacts of MSG (Meteosat-9) and MTSAT are larger than for the other GEO satellites whereas NOAA AVHRR shows a very small impact (Fig. 3a). Mean observation impacts, however, are fairly similar across all platforms (Fig. 3b) and surprisingly similar given the different ways in which AMVs are generated. This indicates that the number of observations is the main factor in determining the AMV impact of each platform in the Met Office global NWP system.

Fig. 3. The AMV impact on forecast error reduction between platforms: (a) observation impact and (b) mean impact per observation.

c. Observation impact by technique

The observation impact of satellite by technique is shown in Fig. 4. The microwave and infrared sounders together are measured as having an impact of about 78% of the total satellite observation impact; 43% is from microwave soundings with the remaining 35% from infrared soundings. The imagers account for 10%, followed by scatterometers (5%) and GPSRO (4%) (Fig. 4a). By contrast, GPSRO data give the largest mean impact per observation (Fig. 4b).

Fig. 4. Comparison of the impacts from observations of each satellite observing technique, as described by the technique column of Table 2: (a) observation impact and (b) mean impact per observation.

In Fig. 4, the IRS instruments show a smaller impact than the MWS instruments. However, it is necessary to distinguish between the more modern hyperspectral infrared sounders (IASI and AIRS) and the older instruments, such as HIRS. Figure 5 shows the observation impact and mean impact per observation of each sounder in this study. The impacts per sounding of the hyperspectral IR sounders, MetOp-A/IASI and Aqua/AIRS, are larger than those of the microwave sounders. The observation impact of NOAA-18 AMSU-A is smaller than that of the other AMSU-A instruments because fewer NOAA-18 soundings are used.

Fig. 5. (a) Observation impact and (b) mean impact per observation of the instruments using MWS and IRS techniques.

d. Observation impact for operational–research subsets

The continuity of those components of the observing system funded by research programs cannot be guaranteed. The impacts have therefore been categorized so that the proportions contributed by operational and by research-funded satellites can be compared. This information may be important in deciding how to design the satellite observing system in the future as various research missions come to an end. The satellites in each category are listed in the last item of the list in section 2c. Satellites categorized as operational were found to give an impact 4 times larger than that of research satellites. Most of the contribution of research satellites is from Aqua/AIRS, as was shown in Figs. 2 and 5.

e. Observation impact for MetOp sensors

The impact of each sensor on board MetOp-A is compared in Fig. 6. IASI is the most valuable sensor on MetOp-A, giving about 49% of the observation impact of MetOp-A data, followed by AMSU-A (31%), ASCAT (13%), MHS (3%), GRAS (2%), and HIRS (2%). The relatively small observation impacts of HIRS and MHS are also seen in equivalent plots for NOAA series satellites (not shown here). The observation impacts of AMSU-A, MHS, and HIRS on board MetOp-A are similar to those for NOAA satellites, as was shown in Fig. 5a. The leading role of MetOp-A in reducing the forecast error, compared with the NOAA series satellites, is mainly due to the additional instruments: IASI, ASCAT, and GRAS.

Fig. 6. As in Fig. 4, but for the MetOp sensor categories.

GRAS shows the largest mean impact per observation among MetOp's sensors. However, comparing Fig. 6b with Fig. 4b, GRAS data are shown to have a smaller mean impact per observation than other GPSRO data. This is partly because GRAS data are not yet used below 10 km, where GPSRO shows strong beneficial impact (Cardinali 2009).

4. Discussion

The observation impact of MetOp-A data is the largest among all of the satellite categories in this study despite our NOAA category including data from five satellites (NOAA-15 to -19). The large observation impact of MetOp-A compared to NOAA is mainly due to the additional sensors on the platform (IASI, ASCAT, and GRAS). The observation impacts of AMSU-A, MHS, and HIRS on board MetOp-A are similar to those of individual NOAA satellites. This suggests that, even though AMSU-A, MHS, and HIRS on board MetOp-A observe at locations very close to IASI in space and time, their impact on forecast error is not significantly diminished by the additional information from IASI.

The overall observation impact of surface-based observations is smaller than that of satellite data and has become smaller, in relative terms, following the advent of new satellite data such as IASI and AIRS. It should be stressed, however, that surface-based observations show large mean observation impacts per observation. The observation impact per sounding for sonde is about 10 times larger than that for MetOp-A in this study and is larger than that for GPSRO (not shown). Each sonde observation has a significant impact on reducing the forecast error.

This study measures the forecast error reduction in the troposphere only (from the surface to 150 hPa). Observations that have a strong impact on the upper atmosphere will not, therefore, be fully represented by the FSO measure used here. For example, GPSRO provides particularly accurate information at levels above the troposphere, and other studies have shown its impact to be stronger in the stratosphere than in the troposphere (Cardinali 2009).

The Meteosat AMVs have an overall strong beneficial observation impact in the Met Office system. However, Cardinali (2009) showed some degradation of NWP forecast accuracy in the ECMWF system due to Meteosat AMVs derived from the visible and infrared bands at levels below 700 hPa. Gelaro et al. (2010), in work comparing observation impacts among global NWP systems (at NRL, GMAO, and Environment Canada), noted that the benefit of AMV data differs considerably between NWP systems. The strong beneficial contribution of Meteosat AMVs in the Met Office system appears to be caused by the aggregation of small contributions from a large number of observations, as described in section 3b.

Despite the limited sample of impacts studied here, we find our results to be fairly robust. This has been verified in three ways: by calculating standard errors, by checking for changes in mean impacts with time within the period, and by comparison with impacts for a contrasting period. For the observation subsets listed in Table 2, we found standard errors in the mean impacts to be relatively small. When expressed as fractions of the mean impacts per observation, standard errors ranged from 0.0077 (for MetOp-A/IASI) to 0.1377 (for NOAA-15/AVHRR); that is, by this measure each of the mean impacts is statistically significant. However, standard error does not take into account correlations between the errors of impact estimates. Where errors are correlated, mean impacts will be biased and standard error will underestimate the true error in the mean. Such correlations arise when certain atmospheric phenomena span sets of observations in space and time, causing similar errors in the FSO calculation. Figure 7 shows the impacts for the first and second halves of the period, categorized as in Fig. 1. By splitting the impacts in this way, we attempt to avoid some of the temporal correlation in errors. We see that there is little difference in the impacts between the two halves of the period for the categories in Figs. 1 and 7. This is generally true for all observation subtypes studied here. (Some differences are seen in the AMV impacts in Fig. 3 where there are lower numbers of observations.)
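
The robustness checks described above can be sketched as follows, using an invented time series of per-cycle mean impacts for a single observation subset: a naive standard error of the mean is computed (which, as noted, underestimates the true error if cycle-to-cycle errors are correlated), and the period is then split into two halves for comparison. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical series of per-cycle mean impacts for one observation subset
# over ~150 six-hourly cycles (negative = forecast-error reduction).
cycle_means = -1.0e-4 + 2.0e-5 * rng.normal(size=152)

mean = cycle_means.mean()
# Naive standard error of the mean; assumes independent cycles.
stderr = cycle_means.std(ddof=1) / np.sqrt(cycle_means.size)
print(f"mean impact {mean:+.3e}, std error {stderr:.1e} "
      f"({abs(stderr / mean):.4f} of the mean)")

# Split-sample check: compare the first and second halves of the period,
# which removes some of the temporal correlation in errors.
half = cycle_means.size // 2
first, second = cycle_means[:half], cycle_means[half:]
print(f"first half {first.mean():+.3e}, second half {second.mean():+.3e}")
```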

Fig. 7. As in Fig. 1, but for different samples. Summer is the same sample as in Fig. 1 (from 1800 UTC 22 Aug to 1200 UTC 29 Sep 2010), winter is the period from 1800 UTC 30 Jan to 0000 UTC 18 Mar 2012, the first half of summer is from 1800 UTC 22 Aug to 0600 UTC 10 Sep 2010, and the second half of summer is from 1200 UTC 10 Sep to 1200 UTC 29 Sep 2010.

We see a similarly robust result when comparing impacts for different seasons. Results in Cardinali (2009) show that, for global NWP, impacts remain broadly similar across seasons provided that the same data types are assimilated. To verify this and to support conclusions drawn from the primary dataset studied here, we calculated observation impacts using an FSO dataset for the period 1800 UTC 30 January–0000 UTC 18 March 2012. For simplicity, the primary dataset will hereafter be referred to as summer and the additional dataset as winter. FSO results for the winter period were calculated using a global NWP system similar to that used for the summer period; however, there were a few differences in the observation subtypes assimilated, and the error covariances used in the assimilation were updated using a hybrid ensemble–4DVAR technique. A detailed description of the Met Office hybrid ensemble–4DVAR global data assimilation system is given in Clayton et al. (2013). A comparison of these impacts with those originally presented in Fig. 1 can be seen in Fig. 7. Features in the observation impacts for the winter period are generally similar to those presented in section 3 of this paper, despite the aforementioned changes to the system. The most noticeable difference is seen for surface land observations; their greater impact is mainly due to the assimilation of additional METAR observations during the winter period. Probably the most notable difference among other subtypes (not shown) is an improvement in the impact of SSMIS observations, whose impact was lower during the summer period because of known problems with data quality (Lorenc and Marriott 2013).

These complementary investigations demonstrate that there is a high degree of robustness in the results presented in this study. Variations in the magnitude of impacts between periods appear to be sufficiently small to validate the conclusions that have been drawn.

5. Summary

In this study, observation impacts on 24-h forecast error reduction are evaluated using the adjoint-based FSO method developed within the Met Office NWP system. Observation impacts are produced for the period 1800 UTC 22 August–1200 UTC 29 September 2010 at 6-h intervals using the version of the NWP system that was operational at the Met Office from 16 March 2011.

Results show that satellite data account for 64% of the short-range forecast error reduction, with the remaining 36% coming from surface-based observation types. MetOp-A data are measured to have the largest impact of any individual satellite platform (about 25% of the total impact on global forecast error reduction). Their leading role, compared with NOAA satellites, is mainly due to MetOp-A's additional sensors (IASI, ASCAT, and GRAS). Radiosonde profiles give the largest impact among surface-based observation types, followed by aircraft, land surface, and sea surface observations. Even though the total impact of the surface-based observations is smaller than that of the satellite data, the observation impact per profile for radiosondes is about 10 times larger than that for an average MetOp-A sounding.

Microwave and hyperspectral infrared sounding systems are found to give the largest total impacts. However, of the satellite observations, the GPSRO data are measured to have the largest mean impact per observation. In general, it is operational satellites, rather than research satellites, that generate the most forecast error reduction. The Aqua/AIRS instrument, however, was found to have an observation impact comparable to that of operational satellite sounders.

This paper deals with observation impact in an average sense, assessing the overall performance of the satellite data within the context of a state-of-the-art NWP system. However, the impact of satellite data will vary depending on surface properties, cloud interactions, observing time, and so on. The effects of these varying conditions are not explored in this study. As mentioned previously, adjoint-based FSO methods can potentially measure forecast impacts for any subset of observations. We intend to use this tool to investigate further the impact of satellite data as a function of these parameters and to provide guidance to improve the use of current satellite data. We also expect the results to contribute to discussions on the future development of observing systems.

Acknowledgments

The authors wish to thank Andrew Lorenc for his initial development of the adjoint-based sensitivity tools used in this study. S. Joo was funded by the Korean Government Long-Term Fellowship for Overseas Studies (2010-E-0127).

APPENDIX

Glossary of Terminology and Abbreviations

REFERENCES

Bouttier, F., and G. Kelly, 2001: Observation-system experiments in the ECMWF 4D-Var data assimilation system. Quart. J. Roy. Meteor. Soc., 127, 1469–1488.

Cardinali, C., 2009: Forecast sensitivity to observation (FSO) as a diagnostic tool. ECMWF Tech. Memo. 599, 26 pp.

Clayton, A. M., A. C. Lorenc, and D. M. Barker, 2013: Operational implementation of a hybrid ensemble/4D-Var global data assimilation system at the Met Office. Quart. J. Roy. Meteor. Soc., doi:10.1002/qj.2054, in press.

Eyre, J., and Coauthors, 2008: Impact studies with satellite observations at the Met Office. Proc. Fourth WMO Workshop on the Impact of Various Observing Systems on NWP, Geneva, Switzerland, WMO, 120–128. [Available online at http://www.wmo.int/pages/prog/www/OSY/Reports/NWP-4_Geneva2008_index.html.]

Gelaro, R., Y. Zhu, and R. M. Errico, 2007: Examination of various-order adjoint-based approximations of observation impact. Meteor. Z., 16, 685–692.

Gelaro, R., R. H. Langland, S. Pellerin, and R. Todling, 2010: The THORPEX Observation Impact Intercomparison Experiment. Mon. Wea. Rev., 138, 4009–4025.

Hilton, F., N. C. Atkinson, S. J. English, and J. R. Eyre, 2009: Assimilation of IASI at the Met Office and assessment of its impact through observing system experiments. Quart. J. Roy. Meteor. Soc., 135, 495–505.

Kelly, G., and J.-N. Thépaut, 2007: Evaluation of the impact of the space component of the Global Observation System through observing system experiments. ECMWF Newsletter, No. 113, ECMWF, Reading, United Kingdom, 16–28. [Available online at http://www.ecmwf.int/publications/newsletters/pdf/113.pdf.]

Langland, R. H., and N. Baker, 2004: Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus, 56A, 189–201.

Lorenc, A. C., and R. T. Marriott, 2013: Forecast sensitivity to observations in the Met Office Global NWP system. Quart. J. Roy. Meteor. Soc., doi:10.1002/qj.2122, in press.

Pavelin, E. G., S. J. English, and J. R. Eyre, 2008: The assimilation of cloud-affected infrared satellite radiances for numerical weather prediction. Quart. J. Roy. Meteor. Soc., 134, 739–751.