Search Results
You are looking at 1 - 10 of 34 items for
- Author or Editor: John M. Lewis
Abstract
Sasaki's variational analysis method is used to describe the subsynoptic surface conditions accompanying severe local storms. Observations are extracted from the network of surface stations that routinely report every hour. The variational analysis filters the observations by constraining the meteorological fields to satisfy a set of governing prognostic equations. The filtering is monotonic and is designed to admit space and time scales of the order of 500 km and 10 hr, respectively.
The analysis is applied to a severe storm situation on June 10, 1968. The development of an intense squall line from the incipient to mature stage is depicted by an index coupling vertical motion and surface moisture. The results demonstrate that dynamically consistent time continuity can be achieved by using the variational method.
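The filtering idea in this abstract — adjust the analyzed fields toward the observations while weakly enforcing a governing prognostic equation — can be sketched in a few lines. The Python example below is a minimal one-dimensional least-squares illustration of a weak advection constraint linking two observation times; the grid, advection speed, noise level, and constraint weight are all hypothetical choices, and this is not the authors' analysis system.

import numpy as np

# Minimal 1-D illustration of variational filtering with a weak dynamical
# constraint (NOT the authors' analysis system; grid, speed, and weights
# are hypothetical choices made only for demonstration).

nx, dx, dt, c = 50, 50.0e3, 3600.0, 15.0   # grid points, spacing (m), time step (s), advection speed (m/s)
lam = 25.0                                  # constraint weight: larger -> stronger dynamical filtering

x = np.arange(nx) * dx
truth0 = np.sin(2 * np.pi * x / (nx * dx))
truth1 = np.sin(2 * np.pi * (x - c * dt) / (nx * dx))   # truth advected by c*dt

rng = np.random.default_rng(0)
obs0 = truth0 + 0.2 * rng.standard_normal(nx)           # noisy "observations" at two times
obs1 = truth1 + 0.2 * rng.standard_normal(nx)

# Unknowns: analyses at both times, stacked as one vector [u0, u1].
# Cost = ||u0 - obs0||^2 + ||u1 - obs1||^2 + lam * ||(u1 - u0) + c*dt * du0/dx||^2
I = np.eye(nx)
D = (np.roll(I, 1, axis=1) - np.roll(I, -1, axis=1)) / (2 * dx)   # periodic centered difference
A = np.vstack([
    np.hstack([I, np.zeros((nx, nx))]),                  # fit to observations at t0
    np.hstack([np.zeros((nx, nx)), I]),                  # fit to observations at t1
    np.sqrt(lam) * np.hstack([c * dt * D - I, I]),       # weak advection constraint (scaled by dt)
])
b = np.concatenate([obs0, obs1, np.zeros(nx)])
u = np.linalg.lstsq(A, b, rcond=None)[0]
u0, u1 = u[:nx], u[nx:]
print("rms error, raw observations:", np.sqrt(np.mean((obs0 - truth0) ** 2)))
print("rms error, filtered analysis:", np.sqrt(np.mean((u0 - truth0) ** 2)))

Because the dynamical constraint ties the two times together, the analysis at the first time is informed by both sets of observations and its error variance drops accordingly.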
Philip Thompson (1922–94) pioneered innovative approaches to weather analysis and prediction that blended determinism and probability. He generally posed problems in terms of simplified dynamics that were amenable to analytic solution. His precision in problem formulation and his forceful, didactic manner of presentation are linked to his early home-schooling and his experiences with a coterie of young intellectuals. Four of Thompson's contributions are examined with the intention of highlighting their impact on the current state of operational analysis and prediction.
Abstract
The generation of a probabilistic view of dynamical weather prediction is traced back to the early 1950s, to that point in time when deterministic short-range numerical weather prediction (NWP) achieved its earliest success. Eric Eady was the first meteorologist to voice concern over strict determinism—that is, a future determined by the initial state without account for uncertainties in that state. By the end of the decade, Philip Thompson and Edward Lorenz explored the predictability limits of deterministic forecasting and set the stage for an alternate view—a stochastic–dynamic view that was enunciated by Edward Epstein.
The steps in both operational short-range NWP and extended-range forecasting that justified a coupling between probability and dynamical law are followed. A discussion of the bridge from theory to practice follows, and the study ends with a genealogy of ensemble forecasting as an outgrowth of traditions in the history of science.
Meteorologist Carl-Gustaf Rossby is examined as a mentor. In order to evaluate him, the mentor–protégé concept is discussed with the benefit of existing literature on the subject and key examples from the recent history of science. In addition to standard source material, oral histories and letters of reminiscence from approximately 25 former students and associates have been used.
The study indicates that Rossby expected an unusually high degree of independence on the part of his protégés, but that he was exceptional in his ability to engage the protégés on an intellectual basis—to scientifically excite them on issues of importance to him. Once they were entrained, however, Rossby was not inclined to follow their work closely.
He surrounded himself with a cadre of exceptional teachers who complemented his own heuristic style, and he further used his influence to establish a steady stream of first-rate visitors to the institutes. In this environment that bristled with ideas and discourse, the protégés thrived.
A list of Rossby's protégés and the titles of their doctoral dissertations are also included.
In the late 1960s, well before the availability of computer power to produce ensemble weather forecasts, Edward Epstein (1931–2008) developed a stochastic–dynamic prediction (SDP) method for calculating the temporal evolution of mean value, variance, and covariance of the model variables: the statistical moments of a time-varying probability density function that define an ensemble forecast. This statistical–dynamical approach to ensemble forecasting is an alternative to the Monte Carlo formulation that is currently used in operations. The stages of Epstein's career that led to his development of this methodology are presented with the benefit of his oral history and supporting documentation that describes the retreat of strict deterministic weather forecasting. The important follow-on research by two of Epstein's protégés, Rex Fleming and Eric Pitcher, is also presented.
A low-order nonlinear dynamical system is used to discuss the rudiments of SDP and Monte Carlo and to compare these approximate methods with the exact solution found by solving Liouville's equation. Graphical results from these various methods of solution are found in the main body of the paper while mathematical development is contained in an online supplement. The paper ends with a discussion of SDP's strengths and weaknesses and its possible future as an operational and research tool in probabilistic–dynamic weather prediction.
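The contrast between moment (stochastic–dynamic) integration and Monte Carlo sampling can be illustrated with a low-order example of one's own. The Python sketch below uses a toy scalar equation dx/dt = ax − x² and a Gaussian closure for the third moment; the system, the closure, and all numbers are assumptions for illustration, not those of the paper or its online supplement.

import numpy as np

# Toy comparison of stochastic-dynamic prediction (moment equations) with a
# Monte Carlo ensemble for the scalar nonlinear system dx/dt = a*x - x**2.
# The system, the Gaussian closure <x^3> = mu^3 + 3*mu*sig2, and all numbers
# are illustrative assumptions, not those used in the paper.

a, dt, nsteps = 1.0, 0.01, 300
mu, sig2 = 0.5, 0.01            # initial mean and variance of the uncertain state

rng = np.random.default_rng(1)
ens = rng.normal(mu, np.sqrt(sig2), size=20_000)   # Monte Carlo ensemble members

def moment_tendencies(mu, sig2):
    """SDP tendencies for mean and variance under a Gaussian third-moment closure."""
    dmu = a * mu - (mu**2 + sig2)
    dsig2 = 2.0 * sig2 * (a - 2.0 * mu)
    return dmu, dsig2

for _ in range(nsteps):
    # Forward Euler keeps the sketch short; a higher-order scheme would be more accurate.
    dmu, dsig2 = moment_tendencies(mu, sig2)
    mu, sig2 = mu + dt * dmu, sig2 + dt * dsig2
    ens = ens + dt * (a * ens - ens**2)             # advance every ensemble member

print(f"SDP         : mean = {mu:.4f}, variance = {sig2:.6f}")
print(f"Monte Carlo : mean = {ens.mean():.4f}, variance = {ens.var():.6f}")

The SDP side integrates only two numbers (mean and variance) but requires a closure for the unresolved third moment; the Monte Carlo side needs no closure but converges only as the ensemble grows, which is the trade-off discussed in the abstract.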
Abstract
Inaccuracy in the numerical prediction of the moisture content of return-flow air over the Gulf of Mexico continues to plague operational forecasters. At the Environmental Modeling Center/National Centers for Environmental Prediction in the United States, the prediction errors have exhibited bias—typically too dry in the early 1990s and too moist from the mid-1990s to present. This research explores the possible sources of bias by using a Lagrangian formulation of the classic mixed-layer model. Justification for use of this low-order model rests on careful examination of the upper-air thermodynamic structure in a well-observed event during the Gulf of Mexico Experiment. The mixed-layer constraints are shown to be appropriate for the first phase of return flow, namely, the northerly-flow or outflow phase.
The theme of the research is estimation of sensitivity—change in the model output (at termination of outflow) in response to inaccuracies or uncertainties in the elements of the control vector (the initial conditions, the boundary conditions, and the physical and empirical parameters). The first stage of research explores this sensitivity through a known analytic solution to a reduced form of the mixed-layer equations. Numerically calculated sensitivity (via Runge–Kutta integration of the equations) is compared to the exact values and found to be most credible. Further, because the first- and second-order terms in the solution about the base state can be found exactly for the analytic case, the degree of nonlinearity in the dynamical system can be determined. It is found that the system is “weakly nonlinear”; that is, solutions that result from perturbations to the control vector are well approximated by the first-order terms in the Taylor series expansion. This bodes well for the sensitivity analysis.
The second stage of research examines sensitivity for the general case that includes moisture and imposed subsidence. Results indicate that uncertainties in the initial conditions are significant, yet they are secondary to uncertainties in the boundary conditions and physical/empirical parameters. The sea surface temperatures and associated parameters, the saturation mixing ratio at the sea surface and the turbulent transfer coefficient, exert the most influence on the moisture forecast. Uncertainty in the surface wind speed is also shown to be a major source of systematic error in the forecast. By assuming errors in the elements of the control vector that reflect observational error and uncertainties in the parameters, the bias error in the moisture forecast is estimated. These bias errors are significantly greater than random errors as explored through Monte Carlo experiments. Bias errors of 1–2 g kg−1 in the moisture forecast are possible through a variety of systematic errors in the control vector. The sensitivity analysis also makes it clear that judiciously chosen incorrect specifications of the elements can offset each other and lead to a good moisture forecast.
The paper ends with a discussion of research approaches that hold promise for improved operational forecasts of moisture in return-flow events.
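The sensitivity procedure described above — perturb an element of the control vector, measure the change in the forecast at the end of outflow, and check how well a first-order (tangent-linear) estimate reproduces the full nonlinear response — can be illustrated with a toy slab moisture budget. The bulk-transfer equation and every parameter value in this Python sketch are hypothetical stand-ins, not the paper's mixed-layer model.

import numpy as np

# Toy sensitivity analysis for a slab (mixed-layer) moisture budget,
#     dq/dt = C_q * U * (q_s - q) / h,
# illustrating the general procedure: perturb one element of the control
# vector, measure the change in the forecast, and compare it with a
# first-order (tangent-linear) estimate built from a finite-difference slope.
# The equation and every number below are hypothetical stand-ins.

def forecast(q0, qs, U, Cq=1.2e-3, h=800.0, T=36 * 3600.0, dt=600.0):
    """Integrate the slab moisture equation and return q at the end of outflow."""
    q = q0
    for _ in range(int(T / dt)):
        q += dt * Cq * U * (qs - q) / h
    return q

base = dict(q0=4.0, qs=14.0, U=10.0)        # initial q (g/kg), surface saturation q (g/kg), wind (m/s)
q_base = forecast(**base)

for name, delta in [("qs", 1.0), ("U", 2.0), ("q0", 1.0)]:
    pert = dict(base)
    pert[name] += delta
    q_pert = forecast(**pert)               # full nonlinear response to the perturbation
    eps = 1e-4 * base[name]
    fd = dict(base)
    fd[name] += eps
    slope = (forecast(**fd) - q_base) / eps # finite-difference sensitivity dq/d(element)
    linear = slope * delta                  # first-order (tangent-linear) estimate
    print(f"{name:>2}: nonlinear dq = {q_pert - q_base:+.3f} g/kg, "
          f"linear estimate = {linear:+.3f} g/kg")

Close agreement between the two columns for a given element is the "weak nonlinearity" check; a large gap would warn that first-order sensitivities are not trustworthy for that element.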
Abstract
The dynamical adjustment scheme of P.D. Thompson (1969) has been adapted to the two-parameter baroclinic model, which has potential vorticity as the constraint. In contrast to Thompson's approach, which used a differential-difference form of the constraint in space-time, here the governing equations are discretized. Analyses simulated from analytic functions and analyses derived at the National Meteorological Center (NMC) are used to test the adjustment procedure. The reduction in error variance is related to the characteristics of the analysis error and the consequences of discretization, i.e., truncation error in the constraint and associated Euler–Lagrange equations.
The principal results are as follows:
1) Significant reduction in mean square error of vorticity can be accomplished with systematic or random error sources when r = |V| Δt/Δs < 1, where |V| is the geostrophic advection speed, Δt is one-half the time interval between maps, and Δs is the spatial resolution along the steering contours.
2) The limit of error reduction is reached as r→0, and the limiting values obtained from experiment compare favorably with the theoretical results of Thompson.
3) Height fields that are post-processed from adjusted vorticities also exhibit reduced error variance.
4) Results from the two-parameter model indicate that the strategy of adjustment will be useful in assimilating a sequence of mean temperature (thickness) fields derived from the VISSR Atmospheric Sounder (VAS), which is to be carried on all GOES satellites during this decade.
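The criterion in result 1 is easy to evaluate for a given analysis configuration. The short Python sketch below simply computes r = |V| Δt/Δs and tests it against unity; the sample numbers are arbitrary illustrations, not values taken from the paper.

def thompson_r(advection_speed, map_interval, grid_spacing):
    """Non-dimensional ratio r = |V| * dt / ds from result 1 above.

    advection_speed : geostrophic advection speed |V| (m/s)
    map_interval    : time between successive analysis maps (s); dt is one-half of it
    grid_spacing    : spatial resolution along the steering contours, ds (m)
    """
    dt = 0.5 * map_interval
    return advection_speed * dt / grid_spacing

# Arbitrary example: 15 m/s advection, 12-hourly maps, 400-km resolution.
r = thompson_r(15.0, 12 * 3600.0, 400.0e3)
print(f"r = {r:.2f} ->", "error reduction expected (r < 1)" if r < 1 else "criterion not met (r >= 1)")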
Abstract
The interaction between a squall line and its environment is examined by using the model of Ogura and Cho (1973). This model incorporates a continuous spectrum of cumulus clouds that are distinguished by their entrainment rates. Conversion of liquid water droplets into raindrops has been included in the cloud microphysical process, but the ice phase has been neglected. By virtue of the cloud spectrum, convective transport terms in the larger-scale heat and moisture equations appear as functions of vertical mass flux within the clouds. Once the larger-scale distributions are determined from observations, the vertical mass flux can be found from the budget equations. The cloud populations, i.e., fractional area covered by each cloud category, and the cumulative rainfall rate are functions of this vertical mass flux.
A squall line observed in the National Severe Storms Laboratory (NSSL) network on 8 June 1966 is used to test the theory. This squall line encompassed approximately 10% of the area used in the budget calculations. Observed heat and moisture distributions in the larger scale environment of the squall line are explained in terms of the cumulus processes. A comparison between the theoretically derived cloud population and observed population was made possible by the WSR-57 radar at NSSL. Cloud population was estimated using precipitation reflectivity data from hourly tilt sequences of this 10 cm radar. The observed and theoretical distribution of clouds compared favorably on 1) the relative frequency of tall clouds, and 2) total areal coverage by clouds.
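The bookkeeping from vertical mass flux to fractional area coverage can be illustrated with the generic bulk relation M = ρσw applied per cloud category, i.e., σ_i = M_i/(ρ w_i). The Python sketch below uses invented numbers and is only a schematic of that step; it is not the Ogura–Cho spectral model or the budget inversion used in the paper.

import numpy as np

# Schematic bookkeeping from cloud-base mass flux to fractional area coverage,
# using the generic relation M_i = rho * sigma_i * w_i for each cloud category.
# All numbers are invented for illustration; this is not the Ogura-Cho (1973)
# spectral model nor the budget inversion used in the paper.

rho = 1.1                                        # air density near cloud base (kg/m^3)
categories = ["shallow", "medium", "tall"]       # hypothetical entrainment-rate classes
mass_flux = np.array([0.02, 0.05, 0.12])         # cloud-base mass flux per category (kg m^-2 s^-1)
w_cloud   = np.array([1.0, 3.0, 8.0])            # typical in-cloud vertical velocity (m/s)

sigma = mass_flux / (rho * w_cloud)              # fractional area covered by each category
for name, s in zip(categories, sigma):
    print(f"{name:>7}: fractional area = {100 * s:.2f}%")
print(f"total areal coverage by clouds = {100 * sigma.sum():.2f}%")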
SMAGORINSKY'S GFDL
Building the Team
Joseph Smagorinsky (1924–2005) was a forceful and powerful figure in meteorology during the last half of the twentieth century. He served as director of the Geophysical Fluid Dynamics Laboratory (GFDL) for nearly 30 yr (1955–83); and during his tenure as director, this organization substantially contributed to advances in weather forecasting and climate diagnostics/prediction. The purpose of this research is to explore Smagorinsky's philosophy of science and style of management, which were central to the success of GFDL. Information herein comes from his early scientific publications, personal letters and notes in the possession of his family, several oral histories, and letters of reminiscence from scientists who worked within and outside GFDL.
The principal results of the study are that 1) early inspiration and development of Smagorinsky's scientific philosophy came from his contact with Jule Charney and Harry Wexler, 2) his doctoral dissertation ideally prepared him for appointment as director of the U.S. Weather Bureau's long-range numerical prediction project in 1955—the General Circulation Research Section (later renamed GFDL), 3) he masterfully assembled a team of researchers to attack the challenging problem of general circulation modeling, and 4) he exhibited an authoritarian style of rule tempered by protection of the scientists from disrupting outside influence while celebrating the elitism and esprit de corps that characterized the laboratory.
A list of Smagorinsky's management principles is found in the appendix. Several of these tenets have been interspersed in the main body of the paper in support of actions he took at GFDL.