Search Results
Showing 1–10 of 27 items for Author or Editor: Roberto Buizza
Abstract
The 51-member TL399L62 ECMWF ensemble prediction system (EPS51) is compared with a lagged ensemble system based on the six most recent ECMWF TL799L91 forecasts (LAG6). The EPS51 and LAG6 systems are also compared to two 6-member ensembles with a "weighted" ensemble mean: EPS6wEM and LAG6wEM. EPS6wEM comprises six members of EPS51, and LAG6wEM comprises the six LAG6 members; in both, the ensemble mean is constructed by giving optimal weights to the individual members, with the optimal weights based on 50-day forecast error statistics of each member (in EPS51 and LAG6 the ensemble mean gives the same weight to each member). The EPS51, LAG6, EPS6wEM, and LAG6wEM ensembles are compared for a 7-month period (from 1 April to 30 October 2006; 213 cases) and for two of the most severe storms to hit the Scandinavian countries since 1969.
The study shows that EPS51 has the best-tuned ensemble spread and provides the best probabilistic forecasts, with differences in predictability between EPS51 and the LAG6 or LAG6wEM probabilistic forecasts of geopotential height anomalies of up to 24 h. In terms of the ensemble mean, EPS51 gives the best forecast from forecast day 4; before day 4, LAG6wEM provides a slightly better forecast, with differences in predictability smaller than 2 h up to forecast day 6 and of about 6 h afterward. The comparison also shows that a larger ensemble size is more important in the medium range than in the short range.
Overall, these results indicate that if the aim of ensemble prediction is to generate not only a single (most likely) scenario but also a probabilistic forecast, then EPS51 has higher skill than the lagged ensemble system based on LAG6 or LAG6wEM.
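The weighted ensemble mean used in EPS6wEM and LAG6wEM can be sketched as follows. The abstract only says the weights come from 50-day forecast error statistics of each member; the inverse-mean-squared-error weighting below is one plausible choice, not the paper's exact scheme.

```python
import numpy as np

def weighted_ensemble_mean(members, training_errors):
    """Combine ensemble members with weights derived from past errors.

    members         : (n_members, n_points) current forecasts
    training_errors : (n_members,) mean-squared error of each member
                      over a past training window (e.g., 50 days)

    Hypothetical weighting: inverse MSE, normalized to sum to 1.
    """
    w = 1.0 / np.asarray(training_errors, dtype=float)
    w /= w.sum()
    return w @ np.asarray(members, dtype=float)

# With equal past errors this reduces to the ordinary (unweighted)
# ensemble mean used in EPS51 and LAG6.
members = np.array([[1.0, 2.0], [3.0, 4.0]])
print(weighted_ensemble_mean(members, [1.0, 1.0]))  # → [2. 3.]
```

A member with a larger training error receives a smaller weight, so a consistently poor member is downweighted rather than discarded.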
Abstract
It is shown that a numerical weather prediction system with variable resolution, higher in the early forecast range and lower afterward, provides more skilful forecasts than a system with constant resolution. Results indicate that the advantage can also be detected beyond the time when the resolution is truncated (the truncation time). Forecasts generated with a T399 spectral truncation up to forecast day 3 and a T255 truncation from day 3 to day 8 (VAR3) are compared with forecasts generated with a constant T319 truncation. First, forecasts are verified in an idealized model error (IME) scenario against higher-resolution T799 simulations. In this scenario, VAR3 outperforms the T319 system beyond the day-3 truncation time for the entire 8-day forecast range, with differences statistically significant at the 5% level. Second, forecasts are verified in a realistic scenario against T799 analyses. In this case, although the advantage of VAR3 can still be detected beyond day 3, it is less evident and not statistically significant. Forecast error spectra indicate that using a higher-resolution model during the first forecast days improves the forecasts of the large scales, thus helping to maintain the advantage of the variable-resolution system beyond the truncation time. VAR3 and T319 ensembles are also compared with forecasts at constant T255, T399, and T799 resolutions. The predictability "gain" of all ensemble configurations is measured with respect to the reference constant T255 configuration. Results show that, in the realistic scenario, VAR3 gives gains 50%–75% higher than T319 and 50%–75% lower than T799.
Abstract
The accuracy and the potential economic value of categorical and probabilistic forecasts of discrete events are discussed. Accuracy is assessed by applying known measures of forecast accuracy, and the potential economic value is measured by a weighted difference between the system's probability of detection and probability of false detection, with weights that are functions of the cost–loss ratio and of the observed relative frequency of the event.
Results obtained using synthetic forecast and observed fields document the sensitivity of the accuracy measures and of the potential forecast economic value to imposed random and systematic errors. It is shown that forecast skill cannot be defined per se but depends on the measure used to assess it: forecasts judged to be skillful according to one measure can show no skill according to another. More generally, it is concluded that the design of a forecasting system should follow from the definition of its purposes, and should be such that the ensemble system maximizes its performance as assessed by the accuracy measures that best quantify the achievement of those purposes.
Results also indicate that, independently of the type of model error (random or systematic), ensemble-based probabilistic forecasts exhibit higher potential economic value than categorical forecasts.
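The value measure described above can be sketched from a 2x2 contingency table. The combination weights s*(1 - alpha) and alpha*(1 - s), where alpha is the user's cost–loss ratio and s the observed event frequency, are one plausible choice consistent with cost–loss analyses, not necessarily the paper's exact definition.

```python
def contingency_rates(hits, misses, false_alarms, correct_negatives):
    """Probability of detection (POD) and probability of false
    detection (POFD) from a 2x2 contingency table."""
    pod = hits / (hits + misses)
    pofd = false_alarms / (false_alarms + correct_negatives)
    return pod, pofd

def potential_value(pod, pofd, alpha, s):
    """Weighted difference between POD and POFD.

    alpha : cost-loss ratio C/L of the forecast user (0 < alpha < 1)
    s     : observed (climatological) frequency of the event

    The specific weights below are an illustrative assumption.
    """
    return s * (1.0 - alpha) * pod - alpha * (1.0 - s) * pofd

pod, pofd = contingency_rates(hits=40, misses=10,
                              false_alarms=20, correct_negatives=130)
print(round(pod, 2), round(pofd, 3))  # → 0.8 0.133
```

A perfect system (POD = 1, POFD = 0) maximizes the value for any alpha, while a system that only raises false alarms has negative value.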
Abstract
Ensemble forecasting is a feasible method to integrate a deterministic forecast with an estimate of the probability distribution of atmospheric states. At the European Centre for Medium-Range Weather Forecasts (ECMWF), the Ensemble Prediction System (EPS) comprises 32 perturbed and 1 unperturbed nonlinear integrations, at T63 spectral triangular truncation and with 19 vertical levels. The perturbed initial conditions are generated using the most unstable directions growing over a 48-h time period, computed at T42L19 resolution.
This work describes the performance of the ECMWF EPS during the first 21 months of daily operation, from 1 May 1994 to 31 January 1996, focusing on the 500-hPa geopotential height fields.
First, the EPS is described, and the validation approach followed throughout this work is discussed. In particular, spread and skill distribution functions are introduced to define a more integral validation methodology for ensemble prediction.
Then, the potential forecast skill of ensemble prediction is estimated considering one ensemble member as verification (perfect ensemble assumption). In particular, the ratio between ensemble spread and control error is computed, and the potential correlation between ensemble spread and control forecast skill is evaluated. The results obtained within the perfect ensemble hypothesis give estimates of the limits of forecast skill to be expected for the ECMWF EPS.
Finally, the EPS is validated against analysis fields, and the EPS skill is compared with the skill of the perfect ensemble. Results indicate that the EPS spread is smaller than the distance between the control forecast and the analysis. Considering ensemble spread–control skill scatter diagrams, a so-called faulty index is introduced to estimate the percentage of wrongly predicted cases with small spread/high control skill. Results suggest that there is some correspondence between small ensemble spread and high control skill. Considering the 500-hPa geopotential height field over the Northern Hemisphere at forecast day 7, approximately 20% (45%) of the perturbed ensemble members have anomaly correlation skill higher than 0.6 during warm (cold) seasons, respectively. The percentage of analysis values lying outside the EPS forecast range is thought to be still too high.
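The skill threshold quoted above (anomaly correlation of 0.6) refers to the anomaly correlation between forecast and analysis departures from climatology. A minimal sketch of the uncentered form of this score, assuming flattened 1-D fields and a shared climatology:

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    """Uncentered anomaly correlation between a forecast field and the
    verifying analysis, both expressed as anomalies from the same
    climatology. Fields are flattened 1-D arrays."""
    fa = np.asarray(forecast, dtype=float) - np.asarray(climatology, dtype=float)
    aa = np.asarray(analysis, dtype=float) - np.asarray(climatology, dtype=float)
    return float(np.sum(fa * aa) / np.sqrt(np.sum(fa**2) * np.sum(aa**2)))

clim = np.zeros(4)
f = np.array([1.0, 2.0, -1.0, 0.5])
print(anomaly_correlation(f, f, clim))  # → 1.0
```

Operational scores usually include latitude-dependent area weights, omitted here for brevity.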
Abstract
The (linear) time evolution of singular vectors computed with a primitive equation model following a 36-h evolving trajectory is analyzed at horizontal triangular spectral truncations T21, T42, and T63.
First, for each resolution, the impact of horizontal diffusion on the singular vector characteristics (amplification factors, total energy spectra) is analyzed. Forecast errors and singular vectors computed with different horizontal diffusion damping times are compared to assess whether, at each resolution, the forecast error projection onto the first 10 most unstable singular vectors is maximized for specific damping times. Results suggest that better projections are obtained with horizontal diffusion damping times on the smallest scale (on divergence) of 3 h at T42 and T63 resolution, and of 12 h at T21.
Then amplification factors, geographical locations, total energy vertical distributions, and spectra of T21, T42, and T63 singular vectors computed, respectively, with 12-, 3-, and 3-h damping time on the smallest scale are analyzed. The ratio among the singular vector amplification factors at T21:T42:T63 resolution is shown to be approximately 1:1.5:2.5. The geographical location and the total energy vertical distribution of T21, T42, and T63 singular vectors are quite similar. By contrast, total energy spectra differ substantially. Forecast error projection onto singular vectors is shown to be slightly larger if higher-resolution singular vectors are used. It is argued that the impact of horizontal resolution on the forecast error projection is marginal because of the lack of physical processes in the forward and adjoint tangent model versions. Moreover, the fact that forecast error projections onto the leading 10 singular vectors are rather small could be seen as an indication that more singular vectors are needed to capture the growing components of forecast error.
Finally, singular vectors and forecast errors are compared to quantify the relevance of the singular vectors of day d to capture the growing features of the error of the forecast started on day d. Results indicate that forecast error projection onto the leading 10 singular vectors decreases if singular vectors of a wrong date are used.
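The quantity used repeatedly above, the forecast error projection onto the leading singular vectors, can be sketched as the fraction of the error's squared norm captured by the singular vector subspace. The Euclidean inner product below is a simplifying assumption; the paper works in a total energy norm, which would require a weighted inner product.

```python
import numpy as np

def projection_fraction(error, singular_vectors):
    """Fraction of the squared norm of `error` captured by the subspace
    spanned by the rows of `singular_vectors` (Euclidean norm only)."""
    e = np.asarray(error, dtype=float)
    v = np.asarray(singular_vectors, dtype=float)
    # Orthonormal basis for the subspace via QR of the transposed matrix.
    q, _ = np.linalg.qr(v.T)
    coeffs = q.T @ e
    return float(coeffs @ coeffs) / float(e @ e)

e = np.array([3.0, 4.0, 0.0])
basis = np.array([[1.0, 0.0, 0.0]])   # one-dimensional subspace (x axis)
print(projection_fraction(e, basis))  # → 0.36
```

A projection fraction near 1 would mean the leading singular vectors span the growing components of the forecast error; the small values reported in the abstract motivate using more singular vectors.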
Abstract
The influence of topography on fluid instability has been studied in the literature both in the beta-channel approximation and on the sphere, mainly using normal modes. A different, recently proposed approach is based on the identification of unstable singular vectors (i.e., structures that have the fastest growth over finite-time intervals). Systems characterized by neutral or damped normal modes have been shown to have singular vectors that grow (e.g., in terms of kinetic energy) over finite-time intervals. Singular vectors do not conserve their shape during time evolution as normal modes do. Various aspects related to the identification of singular vectors of a barotropic flow are analyzed in this paper, with the final goal of studying the impact of orography on these structures.
First, the author focuses on very idealized situations to verify whether neutral and damped flows can sustain structures growing over finite-time intervals. Then, the author studies singular vectors of basic states defined as the superposition of a superrotation and a Rossby–Haurwitz wave, forced by orographies that project onto one spectral component only or by very simple orographies with longitudinally or latitudinally elongated shapes. This first part shows that orography can alter the unstable subspace generated by the most unstable singular vectors, either directly through the action of the orographic term in the linear equation or indirectly by modifying the evolution of the basic state.
In the second part, the author considers a realistic basic state, defined as a mean winter flow computed from 3 months of observed vorticity fields, forced by a realistic orography. It is shown that the orographic forcing can indirectly modify the singular vector structures. In fact, "orographically induced" instabilities can be identified only when time-evolving basic states are considered.
These results show that unstable structures related to physical processes can be captured by the adjoint technique.
Abstract
Empirical orthogonal function (EOF) analysis of deviations from the ensemble mean was used to validate the statistical properties of the TL159 51-member ensemble forecasts run at the European Centre for Medium-Range Weather Forecasts (ECMWF) during the winter of 1996/97. The main purpose of the analysis was to verify the agreement between the amounts of spread variance and error variance accounted for by different EOFs. A suitable score was defined to quantify the agreement between the variance spectra in a given EOF subspace. The agreement between the spread and error distributions for individual principal components (PCs) was also tested using the nonparametric Mann–Whitney test. The analysis was applied to day 3, 5, and 7 forecasts of 500-hPa height over Europe and North America, and of 850-hPa temperature over Europe.
The variance spectra indicate a better performance of the ECMWF Ensemble Prediction System (EPS) over Europe than over North America in the medium range. In the former area, the excess of error variance over spread variance tends to be confined to the nonleading PCs, while for the first two PCs the error variance is smaller than the spread variance at day 3 and in very close agreement with it at day 7. When averaged over a six-EOF subspace, the relative differences between spread and error PC variances are about 25% over Europe, with the smallest discrepancy (15%) for 850-hPa temperature at day 7. Overall, the distribution of variance between different EOFs produced by the EPS over Europe is in good agreement with the observed distribution, the differences being of comparable magnitude to the sampling errors of PC variances in individual seasons.
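The comparison above, spread variance versus error variance per EOF, can be sketched via a singular value decomposition of the deviations from the ensemble mean. This is a minimal single-case illustration; the paper's score aggregates such spectra over many cases and areas, and the error-variance definition here (squared projection of the ensemble-mean error onto each EOF) is an assumption.

```python
import numpy as np

def eof_variances(ensemble, analysis, n_eofs=6):
    """EOFs of deviations from the ensemble mean, returning the spread
    variance and (for one case) the squared error projection per EOF.

    ensemble : (n_members, n_points) forecasts
    analysis : (n_points,) verifying analysis
    """
    ens = np.asarray(ensemble, dtype=float)
    mean = ens.mean(axis=0)
    dev = ens - mean                      # deviations from ensemble mean
    # EOFs are the right singular vectors of the deviation matrix.
    _, sv, eofs = np.linalg.svd(dev, full_matrices=False)
    spread_var = (sv[:n_eofs] ** 2) / (ens.shape[0] - 1)
    err_proj = eofs[:n_eofs] @ (np.asarray(analysis, dtype=float) - mean)
    return spread_var, err_proj ** 2

ens = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
spread, err2 = eof_variances(ens, np.array([1.0, 0.0]), n_eofs=2)
print(round(spread.sum(), 3), round(err2.sum(), 3))  # → 1.333 1.0
```

Agreement between the two spectra, EOF by EOF, is what a well-calibrated ensemble should show.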
Abstract
Between 24 and 26 January 2000, explosive cyclogenesis along the U.S. east coast caused serious economic disruption and loss of life. The performance of the European Centre for Medium-Range Weather Forecasts (ECMWF) high-resolution TL319 model and of the TL159 Ensemble Prediction System (EPS) in predicting the storm evolution is investigated.
The most critical time period to predict was the rapid intensification of the storm between 24 and 25 January. Single deterministic forecasts based on the TL319 model gave skillful predictions only 36 h before the event. By contrast, the EPS indicated the possibility that the storm would hit the affected region 2 days before the event, consistently enhancing the indications present in forecasts issued 3 and 4 days before the event. This suggests that the ECMWF EPS, suitably used, could be a valuable support tool for critical tasks such as alerts for extreme winter weather.
Sensitivity studies indicate that the interaction of initial perturbations and stochastic perturbations added to the model tendencies was a necessary ingredient to have some EPS members correctly predicting the storm.
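The stochastic perturbations mentioned above rescale the model's parameterized tendencies by a random factor. The sketch below captures only the basic idea; the operational ECMWF scheme draws random numbers that are correlated in space and time, whereas this toy version uses a single uncorrelated factor per call, and the amplitude is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_tendency(tendency, amplitude=0.5):
    """Multiply a model tendency by (1 + r), with r drawn uniformly
    from [-amplitude, +amplitude]. A toy version of stochastic
    tendency perturbation (no space-time correlation)."""
    r = rng.uniform(-amplitude, amplitude)
    return (1.0 + r) * np.asarray(tendency, dtype=float)

# With amplitude 0 the tendency is returned unchanged.
print(perturb_tendency([2.0], amplitude=0.0))  # → [2.]
```

Each ensemble member draws its own random factors, so members integrated from slightly different initial conditions also diverge through the model tendencies, the interaction the sensitivity studies point to.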