Search Results
1–8 of 8 items for Author or Editor: Edward H. Barker
Abstract
Aspects of the navy's multivariate optimum interpolation (MVOI) for atmospheric analysis are presented, including an overview of how the MVOI is used in the navy's data-assimilation system and the basic design of the MVOI. The specific features described include a cursory presentation of the optimization method and some of its deficiencies, the application of the volume method in data selection and program design, the structure models used to represent the prediction errors, and the quality control checks applied to the observations.
Validation experiments that illustrate some of the features of the navy's analysis system are presented, showing the exactness of the geostrophic constraint, the effect of correlated observation error, the advantage of the geostrophic constraint, and the impact of satellite temperatures on the analysis. These experiments were necessary to catch minor design and programming errors in the analysis system that are too small to be detected through casual inspection, yet which degrade the quality of the analysis. It is shown that implementation of the volume method gives more precise geostrophic coupling than the gridpoint method, and that inputting satellite temperatures as pressure thicknesses rather than pressure-level heights produces results that agree with the satellite-derived layer thickness values while still tying the analysis to observations of pressure heights. Finally, the validation experiments proved highly effective at removing subtle errors in the analysis system, which led to rapid implementation and an extended error-free operational lifetime.
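The analysis step of an optimum interpolation scheme such as the MVOI can be illustrated with a minimal one-dimensional sketch. All fields, covariance parameters, and values below are hypothetical stand-ins for the structure functions the abstract describes, not the navy's operational code:

```python
import numpy as np

# Background (forecast) heights on a 1D grid and two observations.
grid = np.linspace(0.0, 1000.0, 11)        # analysis points (km)
xb = np.full(grid.size, 5400.0)            # background heights (m), hypothetical
obs_loc = np.array([250.0, 600.0])         # observation positions (km)
yo = np.array([5425.0, 5390.0])            # observed heights (m)

# Gaussian forecast-error correlation model with length scale L,
# a stand-in for the prediction-error structure models.
L, sig_b, sig_o = 300.0, 20.0, 10.0

def corr(a, b):
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / L) ** 2)

B_oo = sig_b**2 * corr(obs_loc, obs_loc)   # background error cov at obs points
B_go = sig_b**2 * corr(grid, obs_loc)      # grid-to-obs background error cov
R = sig_o**2 * np.eye(2)                   # uncorrelated observation error

# OI analysis: xa = xb + K (yo - H xb), with K = B_go (B_oo + R)^(-1).
innov = yo - np.interp(obs_loc, grid, xb)  # observation-minus-forecast
W = np.linalg.solve(B_oo + R, innov)
xa = xb + B_go @ W

print(xa.round(1))
```

The analysis draws toward the high observation near 250 km and the low one near 600 km, with increments decaying away from each observation at the rate set by the correlation length.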
Abstract
Analyzing and balancing wind and mass fields on constant pressure surfaces and then interpolating the results to model coordinates cause significant errors, which lower verification scores and create inertial gravity noise. Although interpolation of geopotential to model coordinates produces less error, the computation of temperature with the model finite difference equations may lead to very large errors, as demonstrated by computation with standard atmosphere profiles.
Solving for temperature using a variational formalism together with the model hydrostatic equation greatly decreases the error in the model's computation of pressure height. The procedure is derived and results are given for two different forms of Arakawa's hydrostatic equation. One of these forms yields an ill-conditioned equation set when geopotential is used to compute temperature. The results show that a significant decrease in the errors of geopotential produced by the model occurs when the variational procedure is used.
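In generic notation (a sketch, not the paper's own symbols), such a variational computation can be posed as a least-squares problem: write the model's discrete hydrostatic equation as a linear map from level temperatures to geopotentials, $\boldsymbol{\Phi}(\mathbf{T}) = \Phi_s\mathbf{1} + \mathbf{A}\mathbf{T}$, and choose the temperatures that best reproduce the geopotentials $\tilde{\boldsymbol{\Phi}}$ interpolated to model coordinates:

```latex
J(\mathbf{T}) = \left\| \mathbf{A}\mathbf{T} + \Phi_s \mathbf{1} - \tilde{\boldsymbol{\Phi}} \right\|^{2},
\qquad
\mathbf{T}^{*} = \left( \mathbf{A}^{\mathsf{T}} \mathbf{A} \right)^{-1}
\mathbf{A}^{\mathsf{T}} \left( \tilde{\boldsymbol{\Phi}} - \Phi_s \mathbf{1} \right).
```

When the chosen discrete form makes $\mathbf{A}$ ill conditioned, as the abstract notes for one of Arakawa's forms, the normal equations should be regularized or solved by an orthogonal factorization rather than inverted directly.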
Abstract
Empirical formulas that have been derived to describe the profiles of wind, temperature, and humidity through the atmospheric surface boundary layer (SBL) are used to derive equations predicting the fluxes of momentum, heat, and moisture through the SBL. These formulas can be applied in the computation of the lower boundary conditions needed for the diffusion equation in planetary boundary layer models.
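As an illustration of how profile formulas of this kind yield fluxes, here is a minimal bulk iteration using Monin-Obukhov similarity with Businger-Dyer stability functions for the unstable case. The constants and input values are illustrative, not the paper's:

```python
import math

k, g = 0.4, 9.81                         # von Karman constant, gravity
z, z0 = 10.0, 0.01                       # measurement height, roughness (m)
U = 5.0                                  # wind speed at z (m/s)
theta, theta_s = 288.0, 291.0            # air and surface potential temp (K)

def psi_m(zeta):                         # momentum stability correction (unstable)
    x = (1.0 - 16.0 * zeta) ** 0.25
    return (2.0 * math.log((1 + x) / 2) + math.log((1 + x * x) / 2)
            - 2.0 * math.atan(x) + math.pi / 2)

def psi_h(zeta):                         # heat stability correction (unstable)
    x = (1.0 - 16.0 * zeta) ** 0.25
    return 2.0 * math.log((1 + x * x) / 2)

L = -1e9                                 # start near neutral, iterate on Obukhov length
for _ in range(20):
    ustar = k * U / (math.log(z / z0) - psi_m(z / L) + psi_m(z0 / L))
    thstar = (k * (theta - theta_s)
              / (math.log(z / z0) - psi_h(z / L) + psi_h(z0 / L)))
    L = ustar ** 2 * theta / (k * g * thstar)

H = -thstar * ustar                      # kinematic sensible heat flux (K m/s)
print(round(ustar, 3), round(H, 4))
```

With the surface warmer than the air, the iteration converges to a negative Obukhov length and an upward (positive) heat flux, the situation in which such fluxes feed a growing convective boundary layer.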
Abstract
A two-dimensional axially symmetric computer model was used to study downwash-induced fog clearings. In order to produce a clearing the helicopter downwash must reach the ground while the helicopter hovers at or above the top of the fog. The major factors affecting the size and penetration of the downwash are the strength of the helicopter and the buoyancy of the downwash. The clearing can be enlarged beyond the size of the primary downwash by surface-induced divergence and by mixing of dry air into the fog.
Abstract
The statistical analysis of innovation (observation minus forecast) vectors is one of the most commonly used techniques for estimating observation and forecast error covariances in large-scale data assimilation. Building on the work of Hollingsworth and Lönnberg, the height innovation data over North America from the Navy Operational Global Atmospheric Prediction System (NOGAPS) are analyzed. The major products of the analysis include (i) observation error variances and vertical correlation functions, (ii) forecast error autocovariances as functions of height and horizontal distance, and (iii) their spectra as functions of height and horizontal wavenumber. Applying a multilevel least squares fitting method, which is simpler and more rigorously constrained than that of Hollingsworth and Lönnberg, a full-space covariance function was determined. It was found that removal of the large-scale horizontal component, which has only small variation in the vertical, reduces the nonseparability. The results were compared with those of Hollingsworth and Lönnberg, and show a 20% overall reduction in forecast errors and a 10% overall reduction in observation errors for the NOGAPS data relative to the ECMWF global model data of 16 yr earlier.
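The partition of innovation variance into forecast and observation parts can be sketched with synthetic data. This is a toy version of the Hollingsworth-Lönnberg idea, using a crude short-separation average in place of a fitted correlation model; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic network: forecast errors correlated between stations,
# observation errors uncorrelated (the key separability assumption).
n_sta, n_days = 40, 4000
x = rng.uniform(0.0, 2000.0, n_sta)               # station positions (km)
d = np.abs(x[:, None] - x[None, :])               # separation matrix
Lf, sig_f, sig_o = 500.0, 1.0, 0.7
Cf = sig_f**2 * np.exp(-0.5 * (d / Lf) ** 2)      # true forecast error cov

ef = rng.multivariate_normal(np.zeros(n_sta), Cf, size=n_days)
eo = sig_o * rng.standard_normal((n_days, n_sta))
innov = ef + eo                                    # observation-minus-forecast

# Sample innovation covariance; at nonzero separation it reflects
# forecast error only, since observation errors do not correlate.
C = innov.T @ innov / n_days
var_innov = np.mean(np.diag(C))

# Crude extrapolation to zero separation: average covariance over the
# shortest nonzero separations stands in for the fitted intercept.
off = ~np.eye(n_sta, dtype=bool)
near = off & (d < 100.0)
var_f = C[near].mean()                             # ~ forecast error variance
var_o = var_innov - var_f                          # ~ observation error variance
print(round(var_f, 2), round(var_o, 2))
```

The innovation variance at zero separation exceeds the extrapolated intercept by exactly the observation error variance, which is what lets a single statistic separate the two error sources.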
Abstract
Two versions of the NOGAPS model are used to generate normal-mode balanced datasets. The various forces that act on gravitational modes are then examined to determine the modes whose coefficient time tendencies are significantly smaller than terms which force them. Those modes are said to be balanced. Results for three different equivalent depths are presented. They indicate that only modes with natural periods shorter than one day appear balanced. That balance is adiabatic. These results agree with those reported by Errico for the NCAR CCM.
On 29 November 1991 a series of collisions involving 164 vehicles occurred on Interstate 5 in the San Joaquin Valley in California in a dust storm that reduced the visibility to near zero. The accompanying high surface winds are hypothesized to result from intense upper-tropospheric downward motion that led to the formation of a strong upper front and tropopause fold and that transported high momentum air downward to midlevels where boundary layer processes could then mix it to the surface. The objectives of the research presented in this paper are to document the event, to provide support for the hypothesis that both upper-level and boundary layer processes were important, and to determine the structure of the mesoscale circulations in this case for future use in evaluating the navy's mesoscale data assimilation system.
The strong upper-level descent present in this case is consistent with what one would expect for jet streak and frontal circulations in combination with quasigeostrophic processes. During the period examined, upper-level data and analyses portray a strong upper-tropospheric jet streak with maximum winds initially in excess of 85 m s−1 (≈170 kt) that weakened as it propagated southward around the base of a long-wave trough. The jet streak was accompanied by a strong upper front and tropopause fold, both of which imply intense downward motion. The vertical motion field near the time of the accidents had two maxima—one that was associated with a combination of quasigeostrophic forcing and terrain-induced descent in the lee of the Sierra and one that was associated with the descending branch of the secondary circulation in the jet streak exit region and the cold advection by both the geostrophic wind and the ageostrophic wind in the upper front. The 700-hPa wind speed maximum over and west of the San Joaquin Valley overlapped with the latter maximum, supporting the hypothesized role of downward momentum transport.
Given the significant 700-hPa wind speeds over the San Joaquin Valley during daytime hours on the day of the collisions, boundary layer mixing associated with solar heating of the earth's surface was then able to generate high surface winds. Once the high surface winds began, a dust storm was inevitable, since winter rains had not yet started and soil conditions were drier than usual in this sixth consecutive drought year. Surface observations from a variety of sources depict blowing dust and high surface winds at numerous locations in the San Joaquin Valley, the Mojave and other desert sites, and in the Los Angeles Basin and other south coast sites. High surface winds and low visibilities began in the late morning at desert and valley sites and lasted until just after sunset, consistent with the hypothesized heating-induced mixing. The 0000 UTC soundings in California portrayed an adiabatic layer from the surface to at least 750 hPa, also supporting the existence of mixing. On the other hand, the high winds in the Los Angeles Basin began near sunset in the wake of a propagating mesoscale trough that appeared to have formed in the lee of the mountains that separate the Los Angeles Basin from the San Joaquin Valley.
Abstract
An air–soil layer coupled scheme is developed to compute surface fluxes of sensible heat and latent heat from data collected at the Oklahoma Atmospheric Radiation Measurement–Cloud and Radiation Testbed (ARM–CART) stations. This new scheme extends the previous variational method of Xu and Qiu in two aspects: 1) it uses observed standard deviations of wind and temperature together with their similarity laws to estimate the effective roughness length, so the computed fluxes are nonlocal; that is, they contain the contributions of large-eddy motions over a nonlocal area of O(100 km2); and 2) it couples the atmospheric layer with the soil–vegetation layer and uses soil data together with the atmospheric measurements (even at a single level), so the computed fluxes are much less sensitive to measurement errors than those computed by the previous variational method. Surface skin temperature and effective roughness length are also retrieved as by-products by the new method.
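A generic illustration (not the paper's actual equations) of why coupling two independent constraints in a variational cost function reduces sensitivity to measurement error: minimizing a two-term quadratic misfit yields the inverse-variance weighted estimate, whose error is smaller than either constraint's alone. The flux value and error magnitudes below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 150.0                                  # hypothetical heat flux (W/m^2)
sig_a, sig_s = 30.0, 20.0                      # atmospheric / soil obs errors

# Many independent noisy realizations of each constraint.
atm = truth + sig_a * rng.standard_normal(10000)
soil = truth + sig_s * rng.standard_normal(10000)

# Minimizing J(F) = (F - atm)^2 / sig_a^2 + (F - soil)^2 / sig_s^2
# gives the inverse-variance weighted mean of the two estimates.
w_a, w_s = 1.0 / sig_a**2, 1.0 / sig_s**2
coupled = (w_a * atm + w_s * soil) / (w_a + w_s)

print(np.std(atm - truth).round(1), np.std(coupled - truth).round(1))
```

The coupled estimate's error standard deviation, $(w_a + w_s)^{-1/2} \approx 16.6$ here, is below both single-constraint errors, which is the sense in which a coupled scheme is "much less sensitive to measurement errors."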