Search Results
Showing 1 - 10 of 10 items for Author or Editor: Thomas E. Rosmond
Abstract
A linear stability model for mesoscale cellular convection in the atmosphere is developed. The model includes a forcing term which is a parameterization of the net heating due to small-scale cumulus convection. A sub-cloud and cloud layer are defined, with the forcing term having non-zero values only in the cloud layer. Positive static stability is assumed in both layers so that the only source of buoyant energy is the forcing term.
The parameterization of latent heating due to cumulus convection is accomplished by assuming that the heating is proportional to the cloud-environment temperature difference and the vertical flux of moisture by the perturbation vertical velocity.
Normal-mode horizontal dependence and exponential time dependence are assumed for the vertical velocity, and the forcing term is defined as proportional to the vertical velocity at the interface between the two layers. The solutions in the two layers are matched across the interface, and the particular solution associated with the forcing term is expressed in terms of the arbitrary constants contained in the homogeneous solutions. This yields a homogeneous matrix equation, which is solved.
Solutions are found for a wide range of values of atmospheric static stability, system depth, mean temperature and relative humidity, as well as varying degrees of anisotropy of the eddy mixing coefficients. The observed flattening of atmospheric cells, with diameter-to-depth ratios an order of magnitude greater than predicted by the stability analysis of classical Rayleigh convection, is duplicated by the model. Anisotropy of the eddy mixing coefficients is not a requirement for flattened cells in the model. The choice of boundary conditions is also of minor importance in producing cell flattening. Growth rates and preferred cell diameters are most sensitive to the relative humidity and static stability of the atmosphere. These two parameters represent, respectively, the source and sink of buoyant energy in the model. Positive static stability is responsible for cell flattening because it suppresses very strongly the relatively large vertical velocities associated with smaller cells.
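The competition the abstract describes, with cumulus heating supplying buoyant energy while static stability and eddy mixing remove it, can be illustrated with a toy nondimensional dispersion relation. This is a hedged sketch only: the functional form, the parameter names (H for heating, N2 for static stability, nu for eddy mixing), and all values are illustrative assumptions, not the paper's actual two-layer model.

```python
import numpy as np

# Toy growth-rate scan, NOT the paper's model: a single vertical mode m,
# heating H proportional to w supplying buoyancy, static stability N2 and
# eddy mixing nu removing it.  All quantities are nondimensional.
def growth_rate(k, m=np.pi, H=1.0, N2=0.0, nu=0.01):
    K2 = k**2 + m**2
    buoyancy = (H - N2) * k**2 / K2              # forcing minus stability
    return np.sqrt(np.maximum(buoyancy, 0.0)) - nu * K2  # mixing damps small cells

k = np.linspace(0.5, 10.0, 500)
k_weak   = k[np.argmax(growth_rate(k, N2=0.0))]   # weak static stability
k_strong = k[np.argmax(growth_rate(k, N2=0.75))]  # strong static stability
# Stronger stability shifts the fastest-growing mode to smaller k,
# i.e. wider (flatter) cells for the same layer depth.
```

Even in this crude form, raising the static stability both lowers the maximum growth rate and moves the preferred wavenumber toward wider cells, qualitatively mirroring the flattening mechanism described above.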
Abstract
The Navy Operational Global Atmospheric Prediction System (NOGAPS) has proven itself competitive with any of the forecast models run by the major operational forecast centers around the world. The Navy depends on NOGAPS for an astonishingly wide range of applications, from ballistic winds in the stratosphere to air-sea fluxes that drive ocean general circulation models. Users of these applications will benefit from a better understanding of how a system such as NOGAPS is developed, what physical assumptions and compromises have been made, and what they can reasonably expect in the future as the system continues to evolve.
The discussion is equally relevant for users of products from other large forecast centers, e.g., the National Meteorological Center and the European Centre for Medium-Range Weather Forecasts. There is little difference in the scientific basis of the models or in the methodologies used for their development. However, the operational priorities of each center and its computer hardware and software environment often dictate what compromises are made and how model-based research is conducted. In this paper, NOGAPS serves as the basis for discussing these issues and the art of numerical weather prediction model development.
Abstract
Poisson's and Helmholtz's equations are perhaps the most frequently occurring and important types of partial differential equations encountered in the atmospheric sciences. This paper presents a very fast, accurate technique for their numerical solution, known as cyclic reduction and factorization. This method has not heretofore been brought to the attention of the meteorological community at large.
This direct method essentially reduces the solution of a separable two-dimensional elliptic equation on an N×M grid to N log2 N tridiagonal systems of order M, which are solved by Gaussian elimination. In its simplest form, as described here, the cyclic reduction procedure can be applied if N is 2^n − 1, 2^n, or 2^n + 1, depending on boundary conditions. However, extensions of the method have been developed which remove this restriction. The method is also easily generalized to higher-dimensional problems.
The mathematical development of the cyclic reduction method is presented here in complete detail, along with the modifications necessary to make it computationally stable. The results of two numerical experiments comparing optimized successive over-relaxation (SOR) with the direct method for the solution of Poisson's equation are presented. For Dirichlet boundary conditions the direct method is up to 50 times faster than SOR for N = M = 128. For Neumann boundary conditions, the direct method has an even greater advantage over SOR. The margin of superiority increases as the size of the array increases.
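The one-dimensional building block of the method, cyclic reduction of a single tridiagonal system with N = 2^n − 1 unknowns, can be sketched in a few lines. This is an illustrative sketch, not the paper's optimized 2D solver: at each level every second equation eliminates its two neighbors, halving the active set until one equation remains, after which the unknowns are recovered by back-substitution.

```python
import numpy as np

def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i]
    by cyclic reduction.  Requires N = 2**n - 1 unknowns (a[0] = c[-1] = 0).
    Illustrative sketch only -- not the paper's production solver."""
    a, b, c, d = (np.asarray(v, float).copy() for v in (a, b, c, d))
    N = len(b)
    n = int(np.log2(N + 1))
    assert 2**n - 1 == N, "N must be 2**n - 1"
    stride = 1
    for _ in range(n - 1):                        # forward reduction
        for i in range(2*stride - 1, N, 2*stride):
            lo, hi = i - stride, i + stride
            alpha, gamma = -a[i]/b[lo], -c[i]/b[hi]
            d[i] += alpha*d[lo] + gamma*d[hi]
            b[i] += alpha*c[lo] + gamma*a[hi]
            a[i], c[i] = alpha*a[lo], gamma*c[hi]
        stride *= 2
    x = np.zeros(N)
    mid = (N - 1)//2
    x[mid] = d[mid]/b[mid]                        # single remaining equation
    while stride > 1:                             # back-substitution
        stride //= 2
        for i in range(stride - 1, N, 2*stride):
            xl = x[i - stride] if i >= stride else 0.0
            xr = x[i + stride] if i + stride < N else 0.0
            x[i] = (d[i] - a[i]*xl - c[i]*xr)/b[i]
    return x

# quick check against a dense solve on a diagonally dominant system
rng = np.random.default_rng(0)
N = 15
a = np.r_[0.0, rng.uniform(-1, 0, N - 1)]
c = np.r_[rng.uniform(-1, 0, N - 1), 0.0]
b = np.full(N, 4.0)
d = rng.standard_normal(N)
x = cyclic_reduction(a, b, c, d)
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
err = np.max(np.abs(A @ x - d))
```

Plain cyclic reduction is well behaved on diagonally dominant systems like the one above; the stability modifications the abstract mentions (e.g., Buneman-style variants) matter when the reduction is stacked to solve the full Poisson-type block systems.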
Abstract
A new dynamical core for numerical weather prediction (NWP) based on the spectral element method is presented. This paper represents a departure from previously published work on solving the atmospheric primitive equations in that the horizontal operators are all written, discretized, and solved in 3D Cartesian space. The advantages of using Cartesian space are that the pole singularity that plagues the equations in spherical coordinates disappears; any grid can be used, including latitude–longitude, icosahedral, hexahedral, and adaptive unstructured grids; and the conversion to a semi-Lagrangian formulation is easily achieved. The main advantage of using the spectral element method is that the horizontal operators can be approximated by local high-order elements while scaling efficiently on distributed-memory computers. In order to validate the 3D global atmospheric spectral element model, results are presented for seven test cases: three barotropic tests that confirm the exponential accuracy of the horizontal operators and four baroclinic test cases that validate the full 3D primitive hydrostatic equations. These four baroclinic test cases are the Rossby–Haurwitz wavenumber 4, the Held–Suarez test, and the Jablonowski–Williamson balanced initial state and baroclinic instability tests. Comparisons with four operational NWP and climate models demonstrate that the spectral element model is at least as accurate as spectral transform models while scaling linearly on distributed-memory computers.
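The "exponential accuracy" claim for local high-order elements can be illustrated on a single 1D element: interpolate a smooth function at high-order nodes, differentiate the interpolant, and watch the error fall faster than any power of the grid spacing as the polynomial order grows. This sketch uses Chebyshev-Lobatto nodes purely for convenience; it is an assumption-laden stand-in, not the model's Legendre-based spectral element operators.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# One high-order "element" on [-1, 1]: interpolate f at p+1 Chebyshev-Lobatto
# nodes, differentiate the interpolant exactly, compare with the true derivative.
def deriv_error(p, f=np.exp, df=np.exp):
    x = np.cos(np.pi * np.arange(p + 1) / p)   # Chebyshev-Lobatto nodes
    coef = C.chebfit(x, f(x), p)               # degree-p interpolant
    return np.max(np.abs(C.chebval(x, C.chebder(coef)) - df(x)))

errs = [deriv_error(p) for p in (4, 8, 16)]
# The error drops exponentially with polynomial order p, not algebraically:
# doubling p gains many orders of magnitude at once.
```

This spectral decay within each element, combined with element-local coupling, is what lets the method match spectral transform accuracy while communicating only between neighboring elements on distributed-memory machines.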
Abstract
Cloud radiative effects are represented in simulations with the general circulation model of the Navy Operational Global Atmospheric Prediction System (NOGAPS) using ingested cloud field data from the ISCCP dataset rather than model-diagnosed cloud fields. The primary objective is to investigate the extent to which the high-temporal-resolution ISCCP data can be used to improve the simulation of cloud radiative effects on the general circulation in GCM simulations, much as observed sea surface temperatures (SSTs) have been used to avoid simulation errors resulting from inaccurately modeled SSTs. Experiments are described that examine the degree to which uncertainties in cloud field vertical structure impair the utility of the observed cloud data in this regard, as well as the extent to which unrealistic combinations of cloud radiative forcing and other physical processes may affect GCM simulations. The potential for such unrealistic combinations stems from the lack of feedback to the cloud fields in simulations using ingested cloud data in place of model-predicted cloud fields.
Simulations for the present work were carried out for three April-through-July periods (1986-1988) using prescribed sea surface temperatures. Analysis of the model results concentrated primarily on the month of July, allowing for a 3-month spinup period. Comparisons with ERBE data show the expected improvement in the simulation of top-of-the-atmosphere radiation fields using the observed cloud data. Three experiments are described that examine the model sensitivity to the vertical structure assumed for the cloud fields. The authors show that although uncertainties in the assumed vertical profiles of cloudiness may have significant effects on certain aspects of the simulations, such effects do not appear to be large in terms of monthly mean quantities except in the case of large errors in the cloud field vertical profiles. Precipitation fields are particularly insensitive to such uncertainties. A preliminary investigation of potential inaccuracies in the representation of cloud radiative effects with ISCCP data, resulting from unrealistic combinations of cloud radiative forcing and other physical processes, is made by comparing simulations with 3-hourly and monthly mean cloud fraction data. The authors find little difference in the simulation of monthly mean quantities in spite of large differences in the temporal variability of the imposed ISCCP-based cloud radiative forcing in these simulations. These results do not preclude the importance of simulating the correct temporal relationship between cloud radiative forcing and other physical processes in climate model simulations, but they do support the assumption that a correct simulation of that relationship is not essential for the simulation of certain monthly mean quantities. The present results point favorably to the use of the ISCCP cloud data for climate model testing, as well as further GCM experiments examining the radiative effects of clouds on the general circulation.
Abstract
We present a description of the development of the spectral forecast components of the Navy Operational Global Atmospheric Prediction System (NOGAPS). The original system, called 3.0, was introduced in January 1988. New versions were introduced in March 1989 (3.1) and August 1989 (3.2). A brief description of each version of the forecast model is given. Each physical parameterization is also described. We discuss the large changes in 3.1 and the motivation behind the changes. Statistical results from forecast comparison tests are discussed. Figures showing the total monthly forecast performance in the Northern Hemisphere and the Southern Hemisphere are also given. A brief discussion is presented of computational details, running times, and memory requirements of the forecast model.
Abstract
We have compared analysis increments produced by the optimal interpolation scheme and initialization increments produced by the nonlinear normal-mode initialization scheme in the U.S. Navy Operational Global Atmospheric Prediction System. Results indicate that analysis increments of height in the tropics are partially removed by the subsequent initialization. Similar results are obtained for the field of horizontal velocity divergence within the extratropics as well as tropics. Consequently, for some fields in some areas, the initialized analyses are primarily defined by the model-produced background field, irrespective of the availability of observations or model error estimates.
Abstract
The effect on weather forecast performance of incorporating ensemble covariances into the initial covariance model of the four-dimensional variational data assimilation (4D-Var) Naval Research Laboratory Atmospheric Variational Data Assimilation System-Accelerated Representer (NAVDAS-AR) is investigated. This NAVDAS-AR-hybrid scheme linearly combines the static NAVDAS-AR initial background error covariance with a covariance derived from an 80-member flow-dependent ensemble. The ensemble members are generated using the ensemble transform technique with a three-dimensional variational data assimilation (3D-Var)-based estimate of analysis error variance. The ensemble covariances are localized using an efficient algorithm enabled via a separable formulation of the localization matrix. The authors describe the development and testing of this scheme, which allows for assimilation experiments using differing linear combinations of the static and flow-dependent background error covariances. The tests are performed for two months of summer and two months of winter using operational model resolution and the operational observational dataset, which is dominated by satellite observations. Results show that the hybrid data assimilation scheme significantly reduces the forecast error across a wide range of variables and regions. The improvements were particularly pronounced for tropical winds. The verification against radiosondes showed a greater than 0.5% reduction in vector wind RMS differences in areas of statistical significance. The verification against self-analysis showed a greater than 1% reduction between 2- and 5-day lead times at all eight vertical levels examined in areas of statistical significance. Using the Navy's summary of verification results, the Navy Operational Global Atmospheric Prediction System (NOGAPS) scorecard, the improvements resulted in a score (+1) that justifies a major system upgrade.
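Stripped of the 4D-Var machinery, the core of a hybrid scheme is a linear blend of a static covariance with a localized ensemble sample covariance. The sketch below is a generic toy under stated assumptions: the function names, the 1D grid, and the Gaussian taper are illustrative (operational systems typically use compactly supported Gaspari-Cohn localization, and NAVDAS-AR uses its own separable formulation).

```python
import numpy as np

def hybrid_covariance(B_static, ensemble, grid, L=15.0, beta=0.5):
    """Blend static and ensemble covariances:
         B_hybrid = (1 - beta) * B_static + beta * (C o P_ens),
       where o is the Schur (elementwise) product with a localization
       matrix C that tapers spurious long-range sample correlations."""
    n, m = ensemble.shape                         # n grid points, m members
    X = (ensemble - ensemble.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
    P_ens = X @ X.T                               # sample covariance
    d = np.abs(grid[:, None] - grid[None, :])     # pairwise distances
    C = np.exp(-0.5 * (d / L) ** 2)               # Gaussian taper (toy choice)
    return (1 - beta) * B_static + beta * (C * P_ens)

# 1D toy: 100-point grid, 80 members drawn from a smooth "true" covariance
rng = np.random.default_rng(1)
n, m = 100, 80
grid = np.arange(n, dtype=float)
B = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 10.0) ** 2) + 1e-6 * np.eye(n)
ens = rng.multivariate_normal(np.zeros(n), B, size=m).T
Bh = hybrid_covariance(B, ens, grid, L=15.0, beta=0.5)
```

Because the Schur product of two positive semidefinite matrices is positive semidefinite, the blend remains a valid covariance for any beta in [0, 1], which is what makes experiments with differing linear combinations well posed.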
Abstract
This paper investigates the nature of model error in complex deterministic nonlinear systems such as weather forecasting models. Forecasting systems incorporate two components, a forecast model and a data assimilation method. The latter projects a collection of observations of reality into a model state. Key features of model error can be understood in terms of geometric properties of the data projection and a model attracting manifold. Model error can be resolved into two components: a projection error, which can be understood as the model’s attractor being in the wrong location given the data projection, and direction error, which can be understood as the trajectories of the model moving in the wrong direction compared to the projection of reality into model space. This investigation introduces some new tools and concepts, including the shadowing filter, causal and noncausal shadow analyses, and various geometric diagnostics. Various properties of forecast errors and model errors are described with reference to low-dimensional systems, like Lorenz’s equations; then, an operational weather forecasting system is shown to have the same predicted behavior. The concepts and tools introduced show promise for the diagnosis of model error and the improvement of ensemble forecasting systems.
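A toy version of the "direction error" component can be computed pointwise: at the same state, compare the tendency of an imperfect model with that of the true system. Here Lorenz-63 with a perturbed parameter stands in for the imperfect model; the state, the parameter perturbation, and the angle diagnostic are illustrative assumptions, not the paper's operational diagnostics.

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Tendency (dx/dt, dy/dt, dz/dt) of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def direction_error(s, rho_model=27.0):
    """Angle (radians) between true and model tendencies at state s:
       a pointwise measure of the model moving in the wrong direction."""
    f_true = lorenz63(s)
    f_model = lorenz63(s, rho=rho_model)   # imperfect model: wrong rho
    cosang = f_true @ f_model / (np.linalg.norm(f_true) * np.linalg.norm(f_model))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

theta = direction_error(np.array([1.0, 2.0, 20.0]))
# theta > 0: even before any error accumulates, the imperfect model's
# trajectory departs locally from the true direction of motion.
```

Projection error, by contrast, concerns where the model's attractor sits relative to the projected observations, and needs trajectory-level (shadowing) analysis rather than a single-state diagnostic like this one.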
Abstract
The Global Forecast System (GFS), which began operation in 1988 at the Central Weather Bureau in Taiwan, has been upgraded to incorporate better numerical methods and more complete parameterization schemes. The second-generation GFS uses multivariate optimum interpolation analysis and incremental nonlinear normal-mode initialization to initialize the forecast model. The forecast model is a global primitive equation model with a resolution of 18 sigma levels in the vertical and triangular truncation at 79 waves in the horizontal. The forecast model includes a 1.5-order eddy mixing parameterization, a gravity wave drag parameterization, a shallow convection parameterization, a relaxed version of the Arakawa-Schubert cumulus parameterization, grid-scale condensation, and longwave and shortwave radiative transfer calculations with consideration of fractional clouds. The performance of the second-generation GFS is significantly better than that of the first-generation GFS. For two 3-month periods in winter 1995/96 and summer 1996, the second-generation GFS provided forecasters with 5-day forecasts whose averaged 500-mb height anomaly correlation coefficients for the Northern Hemisphere were greater than 0.6.
Observational data available to the GFS are much sparser than at other numerical weather prediction centers, especially in the Tropics and the Southern Hemisphere. GRID messages of 5° resolution containing the ECMWF 24-h forecast 500-mb height and 850- and 200-mb wind fields, available once a day on the Global Telecommunications System, are used as supplemental observations to increase data coverage for the GFS data assimilation. The supplemental data improve GFS performance in both the analysis and the forecast.