Search Results
You are looking at 1–9 of 9 items for
- Author or Editor: M. STEVEN TRACTON
Abstract
Today, even with state-of-the-art observational, data assimilation, and modeling systems run routinely on supercomputers, there are often surprises in the prediction of snowstorms, especially the “big ones,” affecting coastal regions of the mid-Atlantic and northeastern United States. Little did the author know that lessons from Fred Sanders' synoptic meteorology class at the Massachusetts Institute of Technology (1967) would later (late 1980s) inspire him to pursue practical issues of predictability in the context of the development of ensemble prediction systems, strategies, and applications for providing information on the inevitable case-dependent uncertainties in forecasts. This paper is a brief qualitative and somewhat colloquial overview, based upon this author's personal involvement and experiences, intended to highlight some basic aspects of the source and nature of uncertainties in forecasts and to illustrate the sort of value-added information ensembles can provide in dealing with uncertainties in predictions of East Coast snowstorms.
Abstract
This paper addresses that aspect of predictability which appears related to scale interaction processes in blocking. The basic approach consists of synoptic analysis and quasi-geostrophic diagnosis of the dynamical processes in the real versus forecast model atmosphere. Time–longitude plots of a blocking index, together with sequences of 500 mb height charts, are used to show the temporal and spatial relationships of circulation systems of three distinct wavebands: planetary, medium, and short scale, defined as total wavenumbers 0–6, 7–12, and 13–30, respectively. The relevant quasi-geostrophic concepts and equations are formulated to permit explicit evaluation of the relative importance of barotropic versus baroclinic mechanisms and the contributions to each of scale interactive and non- (or self-) interactive processes. The case discussed here is the Atlantic/European blocking of January 1987, particularly the initial stage of development (3–12 January), and was drawn from the recent National Meteorological Center (NMC) experiment in Dynamical Extended Range Forecasting (DERF).
The observed blocking pattern at 500 mb largely reflected superposition of medium-scale waves upon a planetary-scale background conducive to blocking. Overall, short-wave features modified the amplitude of the block somewhat, but were critical for its initial appearance in the full 500 mb height field. Barotropic processes, i.e., the horizontal advection of vorticity, were dominant in direct forcing of height tendencies at this level. Baroclinic mechanisms, i.e., thermal advection, differential vorticity advection, and latent heat release, were important indirectly through modifying the barotropic effects. Scale interactions were more important in the development of the block than noninteractive processes involving circulations of a given waveband alone.
Forecasts generated during the DERF experiment with the R40 version of NMC's Medium Range Forecast Model (MRF) failed totally to capture the initial development of the block beyond 3 days in advance. Diagnostic evaluation showed this reflected a feedback loop wherein errors on the subplanetary scales induced errors on the planetary circulation which then magnified subplanetary scale errors, etc. In experiments with the higher resolution T80 version of the model, a blocking anticyclone was predicted even at 10 days and, concomitantly, the amplitude of this error feedback mechanism was markedly reduced.
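The waveband decomposition described above can be sketched numerically. The snippet below filters a field by zonal wavenumber along a single latitude circle, a one-dimensional simplification of the paper's total (two-dimensional) wavenumber bands; the function name `waveband_filter` and the synthetic field are invented for illustration.

```python
import numpy as np

def waveband_filter(z, bands=((0, 6), (7, 12), (13, 30))):
    """Split a field on a latitude circle into zonal-wavenumber bands.

    z     : 1-D array of, e.g., 500 mb heights around a latitude circle
    bands : (low, high) wavenumber ranges; the defaults mirror the
            paper's planetary (0-6), medium (7-12), and short (13-30)
            bands, here applied to zonal rather than total wavenumber
    """
    coeffs = np.fft.rfft(z)
    filtered = []
    for lo, hi in bands:
        c = np.zeros_like(coeffs)
        c[lo:hi + 1] = coeffs[lo:hi + 1]  # keep only this band
        filtered.append(np.fft.irfft(c, n=z.size))
    return filtered

# A synthetic height field with wavenumber-3 and wavenumber-9 components:
lon = np.linspace(0.0, 2.0 * np.pi, 144, endpoint=False)
z = 5520.0 + 50.0 * np.cos(3.0 * lon) + 20.0 * np.cos(9.0 * lon)
planetary, medium, short = waveband_filter(z)
# planetary recovers the mean plus the wavenumber-3 wave; medium the
# wavenumber-9 wave; short is empty for this synthetic field
```

In this sketch a pure cosine falls entirely into one FFT bin, so each band recovers its component exactly; real 500 mb fields would of course spread power across many wavenumbers.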
Abstract
The goal of this study is to determine whether cumulus convection plays a role in the development of extratropical cyclones, and if it does, to determine the nature of that role. The basic approach is to ascertain whether there is a systematic relationship between the observed extent and degree of convective activity accompanying cyclogenesis and the departure of actual storm evolution from that predicted by large-scale dynamical models.
In some instances of extratropical cyclogenesis, cumulus convection plays a crucial role in the initiation of development through the release of latent heat in the vicinity of the cyclone center. In such cases, dynamical models that do not adequately simulate convective precipitation, especially as it might occur in an environment that is unsaturated, will fail to properly forecast the onset of development.
Further evidence, either to support or refute the hypothesis, was derived from detailed analyses of seven additional storms, cursory examination of 12 others, and both qualitative and quantitative consideration of the physical mechanisms involved. Although not conclusive proof of the hypothesis, the evidence does indeed support it.
Significant convection occurred in the center of storms generally only during the early stages of their life history. Latent heat released by convective showers in the vicinity of the Low center appeared to initiate development before such development would have occurred if only the larger scale baroclinic processes were operative. Convective activity not in the immediate vicinity of the Low center did not appear crucial either to the initiation of development or to the trend of continued development following the onset of cyclogenesis.
Abstract
Verification scores are presented to illustrate the general success of NMC forecasters in providing the best day 3, 4, and 5 mean sea level pressure and 6–10-day mean 500-mb height fields given the operationally available array of often conflicting NWP model solutions. As a primer on NMC efforts to enhance the utility of the medium-range forecast guidance, a brief overview is provided on the rationale and expectations for ensemble prediction.
Abstract
On 7 December 1992 NMC began operational ensemble prediction. The ensemble configuration provides 14 independent forecasts every day, verifying on days 1 through 10. The ensemble members are generated through a combination of time lagging [Lagged-Average Forecasting] and a new method, Breeding of Growing Modes (Toth and Kalnay). In adopting the ensemble approach, NMC explicitly recognizes that forecasts are stochastic, not deterministic, in nature. There is no single solution, only an array of possibilities, and forecast ensembles provide a rational basis for assessing the range and likelihood of alternative scenarios.
Given the near saturation of computer resources at NMC, implementation of ensemble prediction required a trade-off between model resolution and multiple runs. Before 7 December 1992, NMC was producing a single global forecast through 10 days with the highest-resolution (T126) version possible of its medium-range forecast model. Now, based on experiments that showed no adverse impact upon the quality of forecasts, the T126 model run is truncated to T62 resolution beyond day 6. The computer savings are used to generate the balance of the ensemble members at the lower T62 resolution. While these complementary runs are, on the average, somewhat less skillful when considered individually, it is expected that ensemble averaging will increase skill levels. More importantly, we expect that ensemble prediction will enhance the utility of NWP by (a) providing a basis for the estimation of the reliability of forecasts, and (b) creating a quantitative foundation for probabilistic forecasting.
A major challenge of ensemble prediction is to condense the large amounts of information provided by ensembles into a user-friendly format that can be easily assimilated and used by forecasters. Some examples of output products relevant to operational forecast applications are illustrated. They include the display of each member of the ensemble, ensemble mean and dispersion fields, “clustering” of similar forecasts, and simple probability estimates.
While this implementation of ensemble prediction is relatively modest (ensembles of 14 members for the forecasts encompassing days 1 through 10), it does provide the basis for development of operational experience with ensemble forecasting, and for research directed toward maximizing the utility of NMC's numerical guidance.
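The Breeding of Growing Modes method mentioned above can be illustrated with a toy sketch: a perturbation is carried forward with the nonlinear model alongside the control and periodically rescaled to a fixed norm standing in for the estimated analysis uncertainty. The function name, the linear stand-in "model," and all parameter values are invented for illustration; this is not the NMC implementation.

```python
import numpy as np

def breed(step, control0, cycles, size, rng):
    """Toy breeding cycle in the spirit of Breeding of Growing Modes
    (Toth and Kalnay).

    step     : one-cycle forecast operator x -> M(x)
    control0 : initial control state
    size     : rescaling norm, a stand-in for the estimated magnitude
               of uncertainty in the control analysis
    Returns the final control state and the bred perturbation, which
    converges toward the fastest-growing directions of the flow.
    """
    control = control0
    pert = rng.standard_normal(np.shape(control0))
    pert *= size / np.linalg.norm(pert)
    for _ in range(cycles):
        grown = step(control + pert) - step(control)  # nonlinear growth
        control = step(control)
        pert = grown * size / np.linalg.norm(grown)   # periodic rescaling
    return control, pert

# With a linear "model" the bred vector converges to the fastest-growing
# direction (here the first coordinate, which doubles each cycle):
A = np.diag([2.0, 0.5])
_, bred = breed(lambda x: A @ x, np.zeros(2), cycles=30, size=0.1,
                rng=np.random.default_rng(1))
```

As the abstracts note, the appeal of this construction operationally is that the perturbed forecasts themselves supply the breeding cycle, so the initial perturbations come at essentially no extra computational cost.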
Abstract
Ensemble forecasting has been operational at NCEP (formerly the National Meteorological Center) since December 1992. In March 1994, more ensemble forecast members were added. In the new configuration, 17 forecasts with the NCEP global model are run every day, out to 16-day lead time. Beyond the 3 control forecasts (a T126 and a T62 resolution control at 0000 UTC and a T126 control at 1200 UTC), 14 perturbed forecasts are made at the reduced T62 resolution. Global products from the ensemble forecasts are available from NCEP via anonymous FTP.
The initial perturbation vectors are derived from seven independent breeding cycles, where the fast-growing nonlinear perturbations grow freely, apart from the periodic rescaling that keeps their magnitude compatible with the estimated uncertainty within the control analysis. The breeding process is an integral part of the extended-range forecasts, and the generation of the initial perturbations for the ensemble is done at no computational cost beyond that of running the forecasts.
A number of graphical forecast products derived from the ensemble are available to the users, including forecasters at the Hydrometeorological Prediction Center and the Climate Prediction Center of NCEP. The products include the ensemble and cluster means, standard deviations, and probabilities of different events. One of the most widely used products is the “spaghetti” diagram where a single map contains all 17 ensemble forecasts, as depicted by a selected contour level of a field, for example, 5520 m at 500-hPa height or 50 m s−1 wind speed at the jet level.
With the aid of the above graphical displays and also by objective verification, the authors have established that the ensemble can provide valuable information for both the short and extended range. In particular, the ensemble can indicate potential problems with the high-resolution control that occur on rare occasions in the short range. Most of the time, the “cloud” of the ensemble encompasses the verification, thus providing a set of alternate possible scenarios beyond that of the control. Moreover, the ensemble provides a more consistent outlook for the future. While consecutive control forecasts verifying on a particular date may often display large “jumps” from one day to the next, the ensemble changes much less, and its envelope of solutions typically remains unchanged. In addition, the ensemble extends the practical limit of weather forecasting by about a day. For example, significant new weather systems (blocking, extratropical cyclones, etc.) are usually detected by some ensemble members a day earlier than by the high-resolution control. Similarly, the ensemble mean improves forecast skill by a day or more in the medium to extended range, with respect to the skill of the control. The ensemble is also useful in pointing out areas and times where the spread within the ensemble is high and consequently low skill can be expected and, conversely, those cases in which forecasters can make a confident extended-range forecast because the low ensemble spread indicates high predictability. Another possible application of the ensemble is identifying potential model errors. A case of low ensemble spread with all forecasts verifying poorly may be an indication of model bias. The advantage of the ensemble approach is that it can potentially indicate a systematic bias even for a single case, while studies using only a control forecast need to average many cases.
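A minimal sketch of how the summary products described above (ensemble mean, spread, and simple event probabilities) might be condensed from raw members; the function name and the 17 member values are invented for illustration.

```python
import numpy as np

def ensemble_products(members, threshold):
    """Condense raw ensemble members into summary products: the
    ensemble mean, the spread (standard deviation across members),
    and a simple probability estimate as the fraction of members
    reaching an event threshold (e.g. the 5520-m 500-hPa contour).
    """
    members = np.asarray(members, dtype=float)
    mean = members.mean(axis=0)
    spread = members.std(axis=0, ddof=1)
    prob = (members >= threshold).mean(axis=0)
    return mean, spread, prob

# 17 invented member values of 500-hPa height at one grid point:
members = 5520.0 + np.array([-40.0, -25.0, -10.0, 0.0, 5.0, 10.0, 15.0,
                             20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0,
                             55.0, 60.0, 65.0])
mean, spread, prob = ensemble_products(members, threshold=5520.0)
# prob is the fraction of the 17 members at or above 5520 m (14/17 here)
```

Applied gridpoint-by-gridpoint to full fields (pass an array of shape `(17, nlat, nlon)`), the same three quantities yield the mean, dispersion, and probability maps that accompany the spaghetti charts.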
Abstract
Numerical forecasts from a pilot program on short-range ensemble forecasting at the National Centers for Environmental Prediction are examined. The ensemble consists of 10 forecasts made using the 80-km Eta Model and 5 forecasts from the regional spectral model. Results indicate that the accuracy of the ensemble mean is comparable to that from the 29-km Meso Eta Model for both mandatory level data and the 36-h forecast cyclone position. Calculations of spread indicate that at 36 and 48 h the spread from initial conditions created using the breeding of growing modes technique is larger than the spread from initial conditions created using different analyses. However, the accuracy of the forecast cyclone position from these two initialization techniques is nearly identical. Results further indicate that using two different numerical models assists in increasing the ensemble spread significantly.
There is little correlation between the spread in the ensemble members and the accuracy of the ensemble mean for the prediction of cyclone location. Since information on forecast uncertainty is needed in many applications, and is one of the reasons to use an ensemble approach, the lack of a correlation between spread and forecast uncertainty presents a challenge to the production of short-range ensemble forecasts.
Even though the ensemble dispersion is not found to be an indication of forecast uncertainty, significant spread can occur within the forecasts over a relatively short time period. Examples are shown to illustrate how small uncertainties in the model initial conditions can lead to large differences in numerical forecasts from an identical numerical model.
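The spread-error relationship examined here can be illustrated with a small sketch. The function name and synthetic numbers are invented; the study's actual verification used cyclone positions and mandatory-level fields rather than an arbitrary scalar.

```python
import numpy as np

def spread_skill_correlation(members, truth):
    """Correlation, across forecast cases, between ensemble spread and
    the absolute error of the ensemble mean for a scalar quantity.

    members : (n_cases, n_members) array of forecasts
    truth   : (n_cases,) array of verifying values
    A value near zero corresponds to the weak spread-skill link for
    cyclone location reported above.
    """
    members = np.asarray(members, dtype=float)
    spread = members.std(axis=1, ddof=1)
    error = np.abs(members.mean(axis=1) - np.asarray(truth, dtype=float))
    return np.corrcoef(spread, error)[0, 1]

# Synthetic cases in which spread tracks error perfectly give r = 1;
# each case's members fan out around truth in proportion to s:
truth = np.array([0.0, 10.0, 20.0, 30.0])
s = np.array([1.0, 2.0, 3.0, 4.0])
members = truth[:, None] + s[:, None] * np.array([0.0, 1.0, 2.0])
r = spread_skill_correlation(members, truth)
```

The study's finding is that for real short-range ensembles this correlation was weak, which is precisely why low dispersion cannot yet be read as high confidence.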
Abstract
Early results are presented of an experimental program in Dynamical Extended Range Forecasting at the National Meteorological Center. The primary objective of this program is to assess the feasibility of extending operational numerical weather prediction beyond the medium range to the monthly outlook problem. Additionally, the extended integrations provide greater insight into systematic errors and climate drift and thereby feedback to model development. In this paper the principal focus is upon assessment of a contiguous set of 108 thirty-day integrations generated with the then operational Medium Range Forecast model from initial conditions 24 hours apart between 14 December 1986 and 31 March 1987.
Results indicate some serious model deficiencies such as the tendency for zonalization, i.e., systematically stronger midlatitude zonal flow than observed, and a stratospheric cold bias, which continues to grow through the 30-day integrations.
In the 1–30 day mean Northern Hemisphere 500 mb height fields the dynamical model is almost always more skillful than persistence. Most of this skill, however, is concentrated in the earlier time ranges so that on average the best estimate of the 30-day mean circulation is not the forecast 30-day mean, but the average of only the first 7–10 days. Beyond 10 days the average skill is low, but the variability in skill is large with many individual cases of skillful predictions.
We consider the problem of enhancing the forecast skill by statistical postprocessing, including time averaging [Lagged Average Forecasting (LAF)], correction of systematic errors, and Empirical Orthogonal Function filtering. A main finding is that these procedures, separately or in combination, can significantly enhance the skill of already skillful predictions but do not have a significant effect on poor forecasts.
Four potential predictors of skill have been examined. Forecast agreement, the degree of consistency between members of LAF ensembles, explains on average about 10% of the regional skill variance, and forecast persistence an additional 5%. The magnitude of the forecast anomaly has virtually no relationship with skill except for small anomalies for which the skill also becomes very small. The Pacific North American (PNA) teleconnection index of the initial circulation regime is an extremely good indicator of forecast skill at midranges where the correlation between the PNA index and skill reaches 0.77.
Finally, a major finding is that a large component of the variability in forecast skill results from the inability to predict the evolution of blocking events beyond a few days into the forecast. We also found a relationship between blocking episodes and the antecedent PNA index, but whether this relationship is more than coincidental has not yet been established.
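The Lagged Average Forecasting idea used throughout this study can be sketched briefly: forecasts launched from successive initial times, all verifying on the same date, are combined into one average. The function name and the weighting scheme shown are illustrative assumptions; the study's actual LAF configuration is not specified in the abstract.

```python
import numpy as np

def lagged_average_forecast(forecasts, weights=None):
    """Average forecasts from successive initial times that all verify
    on the same date (Lagged Average Forecasting, LAF).

    forecasts : (n_lags, ...) array, newest initial time first; the
                trailing axes may be a full gridded field
    weights   : optional weights, e.g. to downweight older members
                with longer lead times; equal weights if omitted
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if weights is None:
        weights = np.ones(forecasts.shape[0])
    weights = np.asarray(weights, dtype=float)
    # Contract the lag axis against the normalized weights:
    return np.tensordot(weights / weights.sum(), forecasts, axes=1)

# Three scalar forecasts verifying on the same date, newest first,
# with the newest weighted twice as heavily:
laf = lagged_average_forecast([1.0, 2.0, 3.0], weights=[2.0, 1.0, 1.0])
# laf == (2*1 + 1*2 + 1*3) / 4 == 1.75
```

The spread among the lagged members is also what supplies the "forecast agreement" predictor of skill discussed above.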