# Search Results

## You are looking at 1–10 of 17 items for

- Author or Editor: Patricia M. Pauley

## Abstract

The spectral response of the Barnes objective analysis scheme near data boundaries is the focus of this note. First of all, a modification of the results presented by Achtemeier is described. In order for the weighted sum (or integral) defining the Barnes scheme to provide an unbiased estimate of the field at grid points, the sum (or integral) of the normalized weights must equal one. The normalizing factor is therefore written as an integral whose limits of integration are kept identical to those for the weighted integral of observations defining the scheme, even as the integral is truncated near a boundary. This modification serves to phrase the theoretical form of the Barnes scheme in a manner that is more consistent with the commonly used discrete form of the scheme. The amplitude and phase-shifted responses using the proposed normalization at an interpolation point on a boundary differ from Achtemeier's results by a factor of two.

The amplitude and phase-shifted responses for a discrete application of the scheme are also examined using the Barnes scheme cast in rectangular coordinates. The amplitude and phase-shifted responses are integrated both using a small sampling interval to approximate the continuous case and using larger sampling intervals representative of typical observation spacings. These discrete results show that the phase shift near boundaries can be reduced by using a larger nondimensional sampling interval (or equivalently, a smaller smoothing scale length). However, this is at the expense of increasing the amplitude response of aliased unresolvable wavelengths. An estimate of the response at the boundary made from gridded values at the boundary confirms the discrete estimate of the response and the proposed modification of Achtemeier's results.
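The normalization property at issue can be illustrated with a minimal one-dimensional sketch (illustrative only: the note treats the scheme in rectangular coordinates, and the function name, Gaussian weight convention, and parameter values here are assumptions). Normalizing by the truncated weight sum keeps the normalized weights summing to one, so a constant field is recovered without bias even at a boundary.

```python
import numpy as np

def barnes_estimate(x0, x_obs, f_obs, kappa):
    """One-pass Barnes estimate at x0 (hypothetical 1-D form).

    Gaussian weights w_k = exp(-(x0 - x_k)^2 / (4 * kappa)) are
    normalized by their own, possibly boundary-truncated, sum, so the
    normalized weights always sum to one and a constant field is
    estimated without bias -- the property the proposed normalization
    preserves near data boundaries.
    """
    w = np.exp(-((x0 - x_obs) ** 2) / (4.0 * kappa))
    return np.sum(w * f_obs) / np.sum(w)

# Uniform observations of a constant field on [0, 10].
x_obs = np.arange(0.0, 10.5, 0.5)
f_obs = np.ones_like(x_obs)
kappa = 1.0

interior = barnes_estimate(5.0, x_obs, f_obs, kappa)  # full weight sum
boundary = barnes_estimate(0.0, x_obs, f_obs, kappa)  # truncated sum
# Both recover 1.0 exactly because the weights are renormalized.
```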

## Abstract

Difficulty analyzing mesoscale features in California and Nevada for a 1991 case study prompted a review of techniques for sea level pressure (SLP) reduction and an evaluation of the performance of the various techniques for the U.S. west coast states at 0000 UTC 30 November 1991. The objective of any SLP reduction procedure is to provide a pressure field that portrays meteorological features rather than terrain features, a difficult goal to meet in this region given the steep terrain gradients on the western slopes of the Sierra Nevada range. The review and evaluation are performed both for techniques applicable at individual stations and for techniques applicable at grid points in a model analysis or forecast.

When using station data, one would like to perform a manual or objective analysis of SLP with the greatest number of stations possible by adding stations that report only altimeter setting to those that report both SLP and altimeter setting. The comparison shows that incorporating altimeter-setting stations into an SLP analysis is practical only at elevations below 300 m, where the various techniques examined gave similar results; the simple reduction is therefore recommended there. Above 300 m, the standard reduction includes empirical corrections that cannot be easily duplicated, and the other reduction techniques yielded values that varied over a large enough range that the uncertainty associated with the choice of technique is too great to permit the analysis of weak mesoscale features. In elevated plateau regions, a pressure analysis on a geopotential surface at approximately the mean terrain height is recommended to minimize reduction errors. No satisfactory solution was found for regions with steep terrain gradients.
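As a rough illustration of a "simple" low-elevation reduction, the hypsometric relation can be applied to a fictitious air column below the station with the standard-atmosphere lapse rate (a sketch only: the operational standard reduction adds empirical corrections that, as noted above, cannot be easily duplicated, and the function name and test values here are illustrative):

```python
import math

G = 9.80665      # gravitational acceleration, m s^-2
RD = 287.05      # gas constant for dry air, J kg^-1 K^-1
GAMMA = 0.0065   # standard-atmosphere lapse rate, K m^-1

def simple_slp(p_sta_hpa, z_m, t_sta_k):
    """Reduce station pressure (hPa) at elevation z_m (m) to sea level,
    assuming a fictitious column below the station with lapse rate
    GAMMA and temperature t_sta_k (K) at the station.  A sketch of a
    simple reduction, not the empirically corrected standard one."""
    return p_sta_hpa * (1.0 + GAMMA * z_m / t_sta_k) ** (G / (RD * GAMMA))

# A 200-m station at 1000 hPa and 15 C reduces to roughly 1024 hPa,
# i.e. about 12 hPa of pressure per 100 m of elevation near sea level.
slp = simple_slp(1000.0, 200.0, 288.15)
```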

Computing SLP from model objective analyses or forecasts that are in the model’s native vertical coordinate, typically the terrain-following sigma coordinate, poses a different set of problems. The model terrain field is usually smoothed and so contains regions where it differs significantly from the actual terrain. This is sufficient in itself to yield reduction errors that have a coherent mesoscale signature. In addition, SLP fields computed using available techniques vary widely in areas of higher terrain elevation, sometimes producing mesoscale features that suspiciously coincide with terrain features and so suggest reduction error. These mesoscale pressure artifacts are also often associated with unrealistic geostrophic wind speed maxima. The Mesinger method of defining the below-ground temperature field by horizontal interpolation across terrain features after interpolating the model sigma-level objective analyses to pressure surfaces worked best for this case. It produced values that agreed reasonably well with the manual SLP analysis and with the 1300-m pressure analysis over Nevada, without generating an artificial geostrophic wind speed maximum.

## Abstract

The primary goal of this paper is to diagnose the “direct” and “indirect” effects of latent heat release on a synoptic-scale wave system containing an extratropical cyclone that developed over the eastern United States. To achieve this goal, comparisons are made between MOIST (full model physics) and DRY (latent heating removed) predictions of the wave system during the period 27–29 February 1984 using the National Meteorological Center's Limited-Area Fine Mesh (LFM) model. Both the MOIST and DRY models predict significant cyclone systems, suggesting that the background adiabatic forcing is quite important. However, the DRY model predicts a weaker cyclone.

The direct and indirect latent heating influences are diagnosed using eddy energy quantities and the extended height tendency equation. Direct effects are restricted to the diabatic generation of available potential energy and height tendencies forced by diabatic heating. Results show that latent heating exerts an important direct influence on the wave system's evolution, particularly below 500 mb. However, latent heating also influences other variables (e.g., temperature, geopotential height, wind speed, and vertical motion), which in turn lead to significant indirect enhancements of other parameters, notably those associated with the baroclinic conversion of potential to kinetic energy during the system's development and height tendencies forced by horizontal temperature advection and vertical static stability advection.

## Abstract

The effect of resolution on the depiction of central sea level pressure for an intense oceanic extratropical cyclone is examined through a one-dimensional Fourier analysis. Profiles of sea level pressure were manually interpolated along the latitude passing through the storm center from two subjective analyses and the 00-, 24-, and 48-h NMC Nested-Grid Model (NGM) forecasts, all valid at 0000 UTC 5 January 1989. At this time, the Experiment on Rapidly Intensifying Cyclones over the Atlantic (ERICA) intensive observing period 4 (IOP 4) cyclone attained its maximum intensity, with a central pressure of 936 mb at 41°N, 58°W in an analysis prepared by Frederick Sanders.

After the Fourier coefficients were determined for each pressure profile they were used to recompute a series of pressure profiles truncated at various maximum wavenumbers λ_{max}. The central sea level pressures obtained from these truncated profiles asymptotically approach the central pressure of the original profile as λ_{max} increases. An effective resolution is defined as the λ_{max} at which the truncated central pressure comes within 1 mb of the original central pressure. This investigation reveals an effective resolution for the NGM of approximately wavenumber 50 (604 km at 41°N) compared to wavenumber 100 (302 km at 41°N) for the hand analyses. A similar examination of the magnitude and location of the maximum *v* component of the geostrophic wind computed from the pressure profiles supports these estimates of effective resolution. However, maximum gradient wind speed was found to be relatively insensitive to resolution, while maximum geostrophic relative vorticity was overly sensitive to resolution. Consequently, neither of the latter two were used to derive effective resolutions.
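The truncation procedure can be sketched with a synthetic profile (illustrative assumptions throughout: a one-dimensional periodic profile, a Gaussian low, and central/background values that merely echo the case's 936- and roughly 1013-mb numbers). Zeroing Fourier coefficients above a cutoff wavenumber and reconstructing shows the truncated central pressure deepening toward the true minimum as the cutoff grows, which is how an effective resolution can be defined.

```python
import numpy as np

def truncated_profile(p, k_max):
    """Zero all Fourier coefficients above wavenumber k_max and
    reconstruct the (periodic) profile via rfft/irfft."""
    c = np.fft.rfft(p)
    c[k_max + 1:] = 0.0
    return np.fft.irfft(c, n=p.size)

# Synthetic sea level pressure profile: a narrow 936-mb low on a
# 1013-mb background, sampled at 360 points around a latitude circle.
n = 360
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
p = 1013.0 - 77.0 * np.exp(-((x - np.pi) ** 2) / (2.0 * 0.05 ** 2))

# Central pressure of the profile truncated at several cutoffs: the
# minimum asymptotically approaches the full-profile value of 936 mb.
centers = {k: truncated_profile(p, k).min() for k in (10, 50, 100)}
```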

The recomputed pressure profiles truncated at wavenumber 50 are virtually identical to the original profiles for the NGM cases. However, the truncated pressure profiles for the hand analyses yield central pressures that approach the NGM 00-h value. Thus, resolution differences alone account for approximately half of the 20-mb central-pressure error in the NGM 24- and 48-h forecasts, while apparent deficiencies in the model, initial conditions, or boundary conditions must account for the other half. In other words, the best performance that could be expected from the current NGM configuration is a central pressure of approximately 945 mb rather than 936 mb, assuming the same background pressure.

## Abstract

Large-scale departures from quasigeostrophic vertical motions are diagnosed for a model simulation of the *QE II* storm (9–11 September 1978). The simulation was performed by the Limited-Area Mesoscale Prediction System (LAMPS), initialized at 1200 UTC 9 September 1978. The model cyclone intensified from a central pressure of 1003 mb to 976 mb in 24 h, considerably short of the 59 mb (24 h)^{−1} observed deepening but reasonable in comparison to other model simulations of this storm. This diagnosis centers on a hydrostatic generalized omega equation, which scales to the quasigeostrophic omega equation for small Rossby number. Vertical motions were computed both from this generalized omega equation and the quasigeostrophic omega equation in order to examine the importance of nonquasigeostrophic effects. The high correlation of vertical motions from a control experiment (using most of the terms in the generalized omega equation) with the vertical motions predicted by the model establishes the validity of the method. A further comparison against satellite imagery also shows that these computed vertical motions portray a pattern similar to the satellite cloud shield. However, the pattern and magnitude of the quasigeostrophic vertical motions are quite different from those of the generalized vertical motions. An evaluation of individual terms in the generalized equation shows that although additional terms in omega placed on the left-hand side significantly affect the magnitude of the vertical motion, the greatest nonquasigeostrophic effects are provided by the diabatic term and the ageostrophic advections. Latent heating greatly enhances the upward motion in the cyclone’s cloud shield, while ageostrophic advections both suppress downward motion behind the cold front and enhance upward motion near the warm front.
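For reference, the quasigeostrophic omega equation to which the generalized equation scales for small Rossby number can be written in its standard textbook pressure-coordinate form (not reproduced from the paper; the generalized equation additionally retains, among other terms, the diabatic heating and ageostrophic advections discussed above):

$$
\left(\nabla^2 + \frac{f_0^2}{\sigma}\frac{\partial^2}{\partial p^2}\right)\omega
= \frac{f_0}{\sigma}\frac{\partial}{\partial p}\left[\mathbf{V}_g\cdot\nabla\left(\zeta_g + f\right)\right]
+ \frac{1}{\sigma}\nabla^2\left[\mathbf{V}_g\cdot\nabla\left(-\frac{\partial\Phi}{\partial p}\right)\right]
$$

Here $\sigma$ is the static stability parameter, $f_0$ the reference Coriolis parameter, $\mathbf{V}_g$ the geostrophic wind, $\zeta_g$ the geostrophic relative vorticity, and $\Phi$ the geopotential.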

## Abstract

No abstract available.

## Abstract

No abstract available.

## Abstract

This paper examines the response of the Barnes objective analysis scheme as a function of wavenumber or wavelength and extends previous work in two primary areas. First, the first- and second-pass theoretical response functions for continuous two-dimensional (2-D) fields are derived using Fourier transforms and compared with Barnes' (1973) responses for one-dimensional (1-D) waves. All responses are nondimensionalized with respect to a smoothing scale length, such that the first-pass responses are a function only of nondimensional wavelength. The 2-D response is of the same functional form as the 1-D response, with the 2-D wavenumber substituted for the 1-D wavenumber. The 2-D response departs significantly from the 1-D value (for the same *x*-component of the wavelength) when the *y*-component of the wavelength is less than approximately ten scale lengths, a condition applying to most fields with closed centers as well as open waves with a significant latitudinal variation.

Second, the continuous theoretical response for 1-D and 2-D waves is compared with the response for discrete applications of the scheme using uniformly spaced observations. This response is evaluated two different ways. The discrete theoretical response is found by discretizing the Barnes scheme with a 2-D “comb” function for cases in which interpolation points are either coincident with or midway between observation points. The response can then be evaluated with Fourier transforms in much the same way as for the continuous case. The discrete response evaluated in this manner attains a minimum at the Nyquist wavelength, with enhanced values for smaller unresolvable wavelengths resulting from aliasing. The discrete response is approximately equal to the sum of the responses for the original wavelength and for a primary aliased wavelength. At larger resolvable wavelengths, the discrete response approaches the continuous value as aliasing becomes negligible.

The second means of examining the response for discrete applications is through a Fourier series analysis of fields interpolated by the Barnes scheme. The “observations” in this context are given by analytic functions on a uniform mesh which may or may not differ from the analysis grid. A given input wavelength leads to an analysis which contains waves at one or more aliased wavelengths in addition to the original wavelength. Components of the response corresponding to each of these wavelengths can be estimated as the Fourier amplitude of the interpolated field divided by the amplitude of the input wave. The actual response from a discrete application of the Barnes scheme confirms the results of the analysis of the discrete theoretical response; aliasing to longer wavelengths is seen for nonresolvable wavelengths in the “observations,” while the actual response is close to the theoretical value for well resolved wavelengths for both 1-D and 2-D fields.
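The amplitude-ratio estimate of the actual response can be sketched in one dimension (hypothetical setup: uniform observations, a single-pass Gaussian weight exp(-r²/4κ) whose continuous response is exp(-κk²), and illustrative spacings; the paper's own experiments cover two-dimensional fields and two-pass applications). A sampled sine wave is interpolated to a fine interior grid and its Fourier amplitude at the input wavelength is divided by the input amplitude.

```python
import numpy as np

def barnes_1d(x_grid, x_obs, f_obs, kappa):
    """One-pass 1-D Barnes interpolation with Gaussian weights."""
    d2 = (x_grid[:, None] - x_obs[None, :]) ** 2
    w = np.exp(-d2 / (4.0 * kappa))
    return (w @ f_obs) / w.sum(axis=1)

def response(lam, dx, kappa):
    """Estimate the actual response at wavelength lam: interpolate a
    sine sampled at spacing dx onto a fine interior grid, then project
    onto the input wave over an integer number of periods, away from
    the data boundaries, and take the amplitude ratio."""
    x_obs = np.arange(0.0, 100.0, dx)
    f_obs = np.sin(2.0 * np.pi * x_obs / lam)
    x_g = 25.0 + np.arange(0.0, 50.0, 0.125)   # interior fine grid
    f_g = barnes_1d(x_g, x_obs, f_obs, kappa)
    c = 2.0 * np.mean(f_g * np.exp(-2j * np.pi * x_g / lam))
    return np.abs(c)

# With unit observation spacing and kappa = 1, resolvable waves come
# out close to the continuous response exp(-kappa * k**2); longer
# waves are damped less than shorter ones.
r5 = response(5.0, 1.0, 1.0)    # five observations per wavelength
r20 = response(20.0, 1.0, 1.0)  # well-resolved wave
```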

This analysis of the discrete and actual response supports the recommendations of Caracena et al. and Koch et al. for the relationship between the smoothing scale length and the observation spacing. Setting the smoothing scale length to approximately four-thirds of the observation spacing, the upper bound recommended by Caracena et al., yields a response close to the continuous value for resolvable wavelengths, with a reasonably small degree of aliasing. However, the case in which the smoothing scale length is equal to the observation spacing, the lower bound recommended by Caracena et al., retains an unacceptably high degree of aliasing in a typical two-pass application of the Barnes scheme.

## Abstract

Distance-dependent weighted averaging (DDWA) is a process that is fundamental to most of the objective analysis schemes that are used in meteorology. Despite its ubiquity, aspects of its effects are still poorly understood. This is especially true for the most typical situation of observations that are discrete, bounded, and irregularly distributed.

To facilitate understanding of the effects of DDWA schemes, a framework that enables the determination of response functions for arbitrary weight functions and data distributions is developed. An essential element of this approach is the equivalent analysis, which is a hypothetical analysis that is produced by using, throughout the analysis domain, the same weight function and data distribution that apply at the point where the response function is desired. This artifice enables the derivation of the response function by way of the convolution theorem. Although this approach requires a bit more effort than an alternative one, the reward is additional insight into the impacts of DDWA analyses.

An important insight gained through this approach is the exact nature of the DDWA response function. For DDWA schemes the response function is the complex conjugate of the normalized Fourier transform of the effective weight function. In facilitating this result, this approach affords a better understanding of which elements (weight functions, data distributions, normalization factors, etc.) affect response functions and how they interact to do so.
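The stated result can be checked numerically with a one-dimensional sketch (a Gaussian weight function, irregular bounded data, and all names and values illustrative): applying the weighted average directly to a complex exponential reproduces the prediction of the response function exactly, including the phase shift induced near a data boundary.

```python
import numpy as np

def ddwa_response(k, x0, x_obs, weight):
    """DDWA response at analysis point x0 for wavenumber k: the complex
    conjugate of the normalized Fourier transform of the effective
    (normalized, observation-located) weight function."""
    w = weight(x_obs - x0)
    w_tilde = w / w.sum()               # normalized weights sum to one
    W = np.sum(w_tilde * np.exp(-1j * k * (x_obs - x0)))
    return np.conjugate(W)

rng = np.random.default_rng(0)
x_obs = np.sort(rng.uniform(0.0, 10.0, 40))  # irregular, bounded data
gauss = lambda r: np.exp(-r ** 2 / 2.0)
x0, k = 0.5, 1.7                             # analysis point near the boundary

# Direct weighted average of a complex exponential ...
f_obs = np.exp(1j * k * x_obs)
w = gauss(x_obs - x0)
analysed = np.sum(w * f_obs) / w.sum()

# ... equals the response-function prediction.  |R| is the amplitude
# modulation; arg(R) is the phase shift produced by the one-sided data
# distribution at the boundary.
R = ddwa_response(k, x0, x_obs, gauss)
predicted = R * np.exp(1j * k * x0)
```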

Tests of the response function for continuous, bounded data and discrete, irregularly distributed data verify the validity of the response functions obtained herein. They also reinforce previous findings regarding the dependence of response functions on analysis location and the impacts of data boundaries and irregular data spacing.

Interpretation of the response function in terms of amplitude and phase modulations is illustrated using examples. Inclusion of phase shift information is important in the evaluation of DDWA schemes when they are applied to situations that may produce significant phase shifts. These situations include those where data boundaries influence the analysis value and where data are irregularly distributed. By illustrating the attendant movement, or shift, of data, phase shift information also provides an elegant interpretation of extrapolation.
