• Bougeault, P., and Coauthors, 2010: The THORPEX Interactive Grand Global Ensemble. Bull. Amer. Meteor. Soc., 91, 1059–1072, doi:10.1175/2010BAMS2853.1.

• DeMaria, M., and Coauthors, 2013: Improvements to the operational tropical cyclone wind speed probability model. Wea. Forecasting, 28, 586–602, doi:10.1175/WAF-D-12-00116.1.

• DeMaria, M., C. Sampson, J. Knaff, and K. Musgrave, 2014: Is tropical cyclone intensity guidance improving? Bull. Amer. Meteor. Soc., 95, 387–398, doi:10.1175/BAMS-D-12-00240.1.

• Doyle, J. D., and Coauthors, 2014: Tropical cyclone prediction using COAMPS-TC. Oceanography, 27, 104–115, doi:10.5670/oceanog.2014.72.

• Ebert, E., U. Damrath, W. Wergen, and M. Baldwin, 2003: The WGNE assessment of short-term quantitative precipitation forecasts. Bull. Amer. Meteor. Soc., 84, 481–492, doi:10.1175/BAMS-84-4-481.

• Elliott, G., and M. Yamaguchi, 2014: Advances in forecasting motion. Eighth Int. Workshop on Tropical Cyclones, Jeju, South Korea, World Meteorological Organization, 1. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/Topic1_AdvancesinForecastingMotion.pdf.]

• Elsberry, R. L., and L. E. Carr III, 2000: Consensus of dynamical tropical cyclone track forecasts—Errors versus spread. Mon. Wea. Rev., 128, 4131–4138, doi:10.1175/1520-0493(2000)129<4131:CODTCT>2.0.CO;2.

• Gall, R., J. Franklin, F. Marks, E. N. Rappaport, and F. Toepfer, 2013: The Hurricane Forecast Improvement Project. Bull. Amer. Meteor. Soc., 94, 329–343, doi:10.1175/BAMS-D-12-00071.1.

• Goerss, J. S., 2000: Tropical cyclone track forecasts using an ensemble of dynamical models. Mon. Wea. Rev., 128, 1187–1193, doi:10.1175/1520-0493(2000)128<1187:TCTFUA>2.0.CO;2.

• Goerss, J. S., 2007: Prediction of consensus tropical cyclone track forecast error. Mon. Wea. Rev., 135, 1985–1993, doi:10.1175/MWR3390.1.

• Hogan, T. F., and Coauthors, 2014: The Navy Global Environmental Model. Oceanography, 27, 116–125, doi:10.5670/oceanog.2014.73.

• Ishida, J., 2016: WGNE intercomparison of tropical cyclone track forecast. 31st Session of the Working Group on Numerical Experimentation, Pretoria, South Africa, World Meteorological Organization, 1–16. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/2_TC_verification.pdf.]

• Ito, K., 2016: Errors in tropical cyclone intensity forecast by RSMC Tokyo and statistical correction using environmental parameters. SOLA, 12, 247–252, doi:10.2151/sola.2016-049.

• Muroi, C., and N. Sato, 1994: Intercomparison of tropical cyclone track forecast by ECMWF, UKMO and JMA operational global models. JMA Numerical Prediction Division Tech. Rep. 31, 26 pp.

• Rappaport, E. N., and Coauthors, 2009: Advances and challenges at the National Hurricane Center. Wea. Forecasting, 24, 395–419, doi:10.1175/2008WAF2222128.1.

• Sampson, C. R., J. A. Knaff, and E. M. Fukada, 2007: Operational evaluation of a selective consensus in the western North Pacific basin. Wea. Forecasting, 22, 671–675, doi:10.1175/WAF991.1.

• Sandu, I., P. Bechtold, A. Beljaars, A. Bozzo, F. Pithan, T. G. Shepherd, and A. Zadra, 2016: Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation. J. Adv. Model. Earth Syst., 8, 196–211, doi:10.1002/2015MS000564.

• Swinbank, R., and Coauthors, 2016: The TIGGE Project and its achievements. Bull. Amer. Meteor. Soc., 97, 49–67, doi:10.1175/BAMS-D-13-00191.1.

• Tomassini, L., P. R. Field, R. Honnert, S. Malardel, R. McTaggart-Cowan, K. Saitou, A. T. Noda, and A. Seifert, 2017: The “grey zone” cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations. J. Adv. Model. Earth Syst., 9, 39–64, doi:10.1002/2016MS000822.

• Tsuyuki, T., R. Sakai, and H. Mino, 2002: The WGNE intercomparison of typhoon track forecasts from operational global models for 1991–2000. WMO Bull., 51, 253–257.

• Walters, D., and Coauthors, 2017: The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 configurations. Geosci. Model Dev., 10, 1487–1520, doi:10.5194/gmd-10-1487-2017.

• WCRP, 1993: Report of the eighth session of the CAS/JSC Working Group on Numerical Experimentation. WMO/TD-549, 41 pp.

• Yamaguchi, M., R. Sakai, M. Kyoda, T. Komori, and T. Kadowaki, 2009: Typhoon Ensemble Prediction System developed at the Japan Meteorological Agency. Mon. Wea. Rev., 137, 2592–2604, doi:10.1175/2009MWR2697.1.

• Zhang, F., and Y. Weng, 2015: Predicting hurricane intensity and associated hazards: A five-year real-time forecast experiment with assimilation of airborne Doppler radar observations. Bull. Amer. Meteor. Soc., 96, 25–33, doi:10.1175/BAMS-D-13-00231.1.

• Zhou, X., Y. Zhu, D. Hou, Y. Luo, J. Peng, and R. Wobus, 2017: Performance of the new NCEP Global Ensemble Forecast System in a parallel experiment. Wea. Forecasting, doi:10.1175/WAF-D-17-0023.1, in press.

WGNE Intercomparison of Tropical Cyclone Forecasts by Operational NWP Models: A Quarter Century and Beyond

  • 1 Meteorological Research Institute, Japan Meteorological Agency, Tsukuba, Japan
  • 2 Japan Meteorological Agency, Tokyo, Japan
  • 3 Meteorological Research Institute, Japan Meteorological Agency, Tsukuba, Japan
Open access

Abstract

Tropical cyclone (TC) track forecasts of operational numerical weather prediction (NWP) models have been compared and verified by the Japan Meteorological Agency (JMA) under an intercomparison project of the Working Group on Numerical Experimentation (WGNE) since 1991. This intercomparison has promoted validation of the global models in the tropics and subtropics. The results have demonstrated a steady increase in the global models’ ability to predict TC positions over the past quarter century.

The intercomparison study started with verification of TCs in the western North Pacific basin with three global models. To date, the verification has been extended to all ocean basins where TCs regularly occur, and 12 global models participate in the project. In recent years, the project has also been extended to include verification of intensity forecasts and of forecasts by regional models.

This intercomparison project has seen a significant improvement in TC track forecasts, both globally and in each TC basin. In the western North Pacific, for example, we have succeeded in obtaining an approximately 2.5-day lead-time improvement. The project has also demonstrated the benefits of multicenter track forecasts (i.e., consensus forecasts). Finally, the paper considers future challenges to TC track forecasting by NWP models that have been identified at the World Meteorological Organization’s (WMO’s) Eighth International Workshop on Tropical Cyclones (IWTC-8). We discuss the priorities and key issues in further improving the accuracy of TC track forecasts, reducing cases of large position errors, and enhancing the use of ensemble information.

© 2017 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

CORRESPONDING AUTHOR: Munehiko Yamaguchi, myamagu@mri-jma.go.jp

A supplement to this article is available online (10.1175/BAMS-D-16-0133.2).


Tropical cyclone track forecasts by operational numerical weather prediction models have greatly improved over the last quarter century, but challenges remain, such as reducing forecast busts and enhancing the use of ensembles.

The Working Group on Numerical Experimentation (WGNE; www.wmo.int/pages/prog/arep/wwrp/rescrosscut/resdept_wgne.html), jointly established by the World Climate Research Programme (WCRP) Joint Scientific Committee (JSC) and the World Meteorological Organization (WMO) Commission for Atmospheric Sciences (CAS), is responsible for fostering the development of atmospheric circulation models for use in weather, climate, water, and environmental prediction on all time scales, and for diagnosing and resolving their shortcomings. The WGNE has promoted various international research projects, including the Drag project (Sandu et al. 2016), the Grey Zone project (Tomassini et al. 2017), and an intercomparison of precipitation forecasts by operational global models (Ebert et al. 2003).

At the eighth session of the WGNE in 1992, the Japan Meteorological Agency (JMA) presented verification results of tropical cyclone (TC) track forecasts of operational global models by three numerical weather prediction (NWP) centers: JMA, the European Centre for Medium-Range Weather Forecasts (ECMWF), and the Met Office (UKMO; Muroi and Sato 1994). The WGNE recognized the importance of such a model validation study, especially for understanding the performance of the global models in the tropics and subtropics, and therefore encouraged the continuation of the verification. Since then, JMA has been reporting the latest verification results at every WGNE session (WCRP 1993; Tsuyuki et al. 2002; Ishida 2016).

The first verification was for TCs generated over the western North Pacific (WNP) basin in 1991 with the above three NWP global models. As the project has continued, the verification regions have been extended to cover all TC basins over the world, and currently, 12 global models are participating in the intercomparison. The verification components include the annual-average TC position errors; the systematic biases in the north–south and east–west as well as the along- and cross-track directions; and the systematic errors common to all or most global models (e.g., forecast busts), which are difficult to identify through verification performed at individual centers. In addition, TC intensity forecasts and TC position errors of regional models are now also verified under the project.

As the WGNE marks 25 years since the inauguration of this intercomparison, it is an opportune time to review the achievements of the project and how much the accuracy of TC track forecasts by the global models has improved in each TC basin and over the globe. The history of the project and methodology of the verification are presented in the second section. The third section describes the verification results, including the time series of annual average position errors, the impact of consensus forecasts, and the scatter diagrams of intensity forecasts. The fourth section discusses some challenges in TC track forecasting by referring to reports and recommendations at the Eighth International Workshop on Tropical Cyclones (IWTC-8) in 2014, and the fifth section summarizes the overall achievements and future directions of the project.

HISTORY AND METHODOLOGY.

Participating numerical weather prediction centers.

The NWP centers participating in the intercomparison are listed in Table 1, together with their years of participation, the resolutions of their global models, and the horizontal resolutions of the data each center provides for the intercomparison. JMA, ECMWF, and UKMO are the original three NWP centers. The Canadian Meteorological Centre (CMC) joined in 1994, the Deutscher Wetterdienst (DWD) in 2000, the National Centers for Environmental Prediction (NCEP) and the Bureau of Meteorology (BoM) of Australia in 2003, Météo-France (FRA) and the China Meteorological Administration (CMA) in 2004, the U.S. Naval Research Laboratory (NRL) and the Centro de Previsão de Tempo e Estudos Climáticos (CPTEC) in 2006, and the Korea Meteorological Administration (KMA) in 2010. Note that CPTEC participated in 2006 only, that KMA did not participate in 2013, and that CMA provided TC tracking data instead of forecast fields in 2004 and 2006. As of 2016, 12 global models were participating in the intercomparison.

Table 1.

NWP centers participating in the WGNE intercomparison, years of participation, resolutions of the global models (T, TL, and L indicate the spectral triangular truncation, the linear grid, and vertical layers, respectively), horizontal resolutions of the data provided (longitude × latitude), and web links or documents on the configuration of the system, if available.


Verifying TC basins.

The intercomparison was launched in 1991 with verification for TCs in the WNP basin. Verification for TCs in the North Atlantic (NAT) basin started in 1999, the eastern North Pacific (ENP) basin in 2000, and the central Pacific (CPC) and northern Indian Ocean (NIO) basins in 2002. In 2004, the Australian (AUR) and southern Indian Ocean (SIO) basins were added, and the verification thereby came to cover all major TC basins over the globe. As of 2016, TCs are verified in each TC basin shown in Fig. 1. Note that TCs in the ENP and CPC basins are verified together, as the number of TCs in the CPC basin is small.

Fig. 1.

TC basins used in the WGNE intercomparison.

Citation: Bulletin of the American Meteorological Society 98, 11; 10.1175/BAMS-D-16-0133.1

For verification, we use best-track position and intensity reported by each Regional Specialized Meteorological Center (RSMC) and Tropical Cyclone Warning Center (TCWC).

Tracking method.

Each participating NWP center provides a gridded dataset of the 6-hourly mean sea level pressure field; the horizontal resolution of this dataset differs from one NWP center to another (see Table 1 for details). The location of a minimum in the mean sea level pressure field is defined as the central position of a TC. A surface-fitting technique is employed, so the central position is not necessarily on a grid point of the provided fields. The initial TC central position is searched for within a 500-km radius of the analyzed central position of the TC, which is based on the best-track data. The TC central position at time T + 6 h is then searched for within a 500-km radius of the initial position. Thereafter, each central position is searched for within a 500-km radius of the point obtained by linearly extrapolating the last two positions. The tracking ends when no appropriate minimum pressure location exists.
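In outline, the tracking procedure can be sketched as follows (an illustrative reconstruction, not the operational code: the function and variable names are ours, and the surface-fitting refinement is omitted, so centers here fall on grid points):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance (km) between points given in degrees."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dphi = p2 - p1
    dlmb = np.radians(np.asarray(lon2) - np.asarray(lon1))
    a = np.sin(dphi / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def track_tc(mslp_fields, lats, lons, first_guess, radius_km=500.0):
    """Track a TC through successive 6-hourly MSLP fields.

    mslp_fields: list of 2D arrays (nlat x nlon); lats, lons: 1D coordinates;
    first_guess: (lat, lon) of the analyzed TC center at the initial time.
    Returns the list of (lat, lon) minimum-pressure centers found.
    """
    glat, glon = np.meshgrid(lats, lons, indexing="ij")
    track = []
    guess = first_guess
    for field in mslp_fields:
        # restrict the search to a 500-km radius around the current guess
        dist = great_circle_km(glat, glon, guess[0], guess[1])
        masked = np.where(dist <= radius_km, field, np.inf)
        if not np.isfinite(masked).any():
            break  # no candidate minimum: tracking ends
        i, j = np.unravel_index(np.argmin(masked), masked.shape)
        track.append((float(lats[i]), float(lons[j])))
        if len(track) >= 2:
            # next search center: linear extrapolation of the last two positions
            (la1, lo1), (la2, lo2) = track[-2], track[-1]
            guess = (2 * la2 - la1, 2 * lo2 - lo1)
        else:
            guess = track[-1]
    return track
```

With a synthetic sequence of pressure fields whose minimum moves poleward by one degree per step, the tracker follows the minimum as long as each displacement stays within the 500-km search radius.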

Verifying TCs.

Only TCs that had maximum sustained winds of 34 knots (1 kt = 0.51 m s−1) or stronger during their lifetimes are verified in this project. In determining the verification times, even if a TC was classified in the best track as a tropical depression (maximum sustained wind below 34 kt) at the initial time of a forecast, both the initial time and later forecast times are included in the verification. Likewise, if a TC weakened to tropical depression status during the forecast period, the forecast is verified for as long as the TC is still analyzed (as a tropical depression) in the best track. Forecast times at which the verifying TC still exists in the forecast but not in reality, or vice versa, are excluded from the verification.

In the Northern Hemisphere, the TC season starts in January and ends in December, so the verification for year YYYY in the WNP, NAT, ENP, and NIO basins is conducted for TCs generated from 1 January to 31 December of YYYY. In the Southern Hemisphere, on the other hand, the TC season starts in September and ends in August: season YYYY spans 1 September of YYYY − 1 to 31 August of YYYY. The verification for YYYY in the AUR and SIO basins is thus conducted for TCs generated during that period.
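The hemisphere-dependent season convention can be captured in a few lines (a sketch; the function name is ours):

```python
from datetime import date

def verification_year(genesis: date, hemisphere: str) -> int:
    """Map a TC genesis date to the verification year used in this project.

    Northern Hemisphere ("NH"): calendar year, 1 Jan-31 Dec.
    Southern Hemisphere ("SH"): season YYYY runs from 1 Sep of YYYY - 1
    through 31 Aug of YYYY, so Sep-Dec genesis dates belong to the
    following year's season.
    """
    if hemisphere == "NH":
        return genesis.year
    return genesis.year + 1 if genesis.month >= 9 else genesis.year
```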

VERIFICATION RESULTS.

Time series of position errors.

TC position errors are evaluated up to 5 days in each verifying TC basin every year. Figure 2 shows the time series of the annual-average TC position errors of 3-day forecasts in each TC basin. Note that the number of verification samples varies among the NWP centers, as the ability to maintain TCs in the model differs from one center to another (Fig. 3). Because the verification samples are not homogeneous, comparisons between systems for any particular year or basin have limitations. Although it is difficult to see a reduction in the TC position errors where the annual number of TCs is small, such as in the NIO basin, a decreasing trend in the errors is evident in the other basins. A similar decreasing trend is seen at other forecast times (not shown).

Fig. 2.

Time series of annual-average TC position errors (km) of 3-day forecasts at each verifying TC basin.

Fig. 3.

The number of verification samples corresponding to the verification in Fig. 2.

The WNP basin is the original verification area of the intercomparison and has the largest annual-average number of TCs among the basins of the world. According to the verification results of the ECMWF global NWP system, for example, the 3-yr running mean of the position errors of 5-day forecasts in 2014 (385 km, the average of 2012–14) falls between those of the 2- and 3-day forecasts in 1993 (331 and 435 km, the average of 1991–93), indicating that deterministic TC track forecasts have gained approximately 2.5 days of lead time over the 22 years. The annual-average reductions in the errors of 1–5-day forecasts, based on a linear regression analysis from 1991 to 2014, are 6.4, 10.0, 13.5, 15.9, and 17.5 km yr−1, respectively, corresponding to annual improvement rates of 2.8%, 2.7%, 2.6%, 2.4%, and 2.1% yr−1. These numbers are based on the average of the verification results from the original three NWP centers: ECMWF, JMA, and UKMO.
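Trend and improvement-rate figures of this kind come from a least-squares fit of annual-mean error against year. A minimal sketch (the percentage normalization, here relative to the fitted error at the midpoint of the period, is one plausible convention; the project's exact definition is not spelled out in this section):

```python
import numpy as np

def error_trend(years, mean_errors_km):
    """Least-squares linear trend of annual-mean TC position error.

    Returns (reduction_km_per_year, improvement_pct_per_year), where the
    percentage normalizes the reduction by the fitted error at the
    midpoint of the period.
    """
    years = np.asarray(years, dtype=float)
    errs = np.asarray(mean_errors_km, dtype=float)
    slope, intercept = np.polyfit(years, errs, 1)  # errs ~ slope*year + intercept
    fitted_mid = slope * years.mean() + intercept
    return -slope, 100.0 * (-slope) / fitted_mid
```

For exactly linear illustrative data, the fitted reduction recovers the slope and the percentage equals the reduction divided by the period-mean error.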

The uncertainty of the average position errors differs among the basins. For example, a significance test for 3-day forecasts at the 95% confidence level shows that the error bar, averaged over all participating NWP models from 2012 to 2014, is ±40, ±78, ±55, ±125, ±70, and ±61 km in the WNP, NAT, ENP, NIO, AUR, and SIO basins, respectively. The error bars in basins where the annual TC number is small, such as the NIO basin, are relatively large.
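Error bars of this kind correspond to a confidence interval on the mean position error; a normal-approximation sketch (the project's exact test formulation is not given here):

```python
import numpy as np

def ci95_halfwidth(errors_km):
    """Half-width of an approximate 95% confidence interval on the mean
    position error, using the normal approximation 1.96 * s / sqrt(n)."""
    e = np.asarray(errors_km, dtype=float)
    return 1.96 * e.std(ddof=1) / np.sqrt(e.size)
```

The half-width shrinks with sample size, which is why basins with few TCs, such as the NIO, carry the widest error bars.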

Verification in the globe and hemispheres.

As mentioned in the “Verifying TC basins” section, the verification has covered all TC basins over the world since 2004. Figures 4a–c show the time series of the annual position errors of 3-day forecasts for all TCs over the globe, the Northern Hemisphere, and the Southern Hemisphere, respectively. Only NWP centers that were in the project as of 2004 are included in the verification. In general, the decreasing trend in the errors is seen not only globally and hemispherically but also in each basin. For the verification over the whole globe, for example, the annual-average reductions in the errors of 1–5-day forecasts, based on a linear regression analysis from 2004 to 2014, are 6.1, 8.8, 11.6, 13.9, and 14.5 km yr−1, respectively, corresponding to annual improvement rates of 4.2%, 3.8%, 3.6%, 3.2%, and 2.6% yr−1. These numbers are based on the average of the verification results from the three NWP centers with relatively small position errors in recent years: ECMWF, NCEP, and UKMO. It is worth noting that the superiority of the ECMWF forecasts is more pronounced in the Southern Hemisphere than in the Northern Hemisphere.

Fig. 4.

Time series of annual-average TC position errors (km) of 3-day forecasts over (a) the globe, (b) the Northern Hemisphere, and (c) the Southern Hemisphere.

Consensus forecasts.

A mean of TC forecast positions from multiple deterministic NWP models, known as a consensus forecast (e.g., Goerss 2000), is widely used for operational track forecasting. Figure 5 shows the TC position errors of consensus forecasts (all models weighted equally) in each verifying TC basin. To secure a large enough sample size, the verification period is the 3 years from 2012 to 2014. Because including NWP models with relatively large position errors would degrade the performance of the consensus forecasts (e.g., Elsberry and Carr 2000), the consensus is constructed from the three NWP centers with the smallest position errors in recent years (ECMWF, NCEP, and UKMO). For a fair comparison, the number of verification samples is set to be the same among the three NWP centers; that is, only forecast times at which TCs are tracked by all three centers are verified (homogeneous verification). In general, the consensus forecasts have smaller position errors than the track forecasts of the best single model, except in the SIO basin. In the WNP basin, for example, the improvement rates of the consensus forecasts relative to the best single-model forecasts are 6.0%, 9.9%, 14.2%, 17.3%, and 17.9% at 1–5 days, respectively, and these numbers tend to grow with forecast time. Though the improvement rates are not as large as those in the WNP basin, the benefit of the consensus forecasts is clearly seen in the NAT, ENP, NIO, and AUR basins. In the SIO basin, on the other hand, the consensus forecasts do not reduce the errors. This is likely because the position errors of the three NWP centers differ substantially rather than being comparable, which limits the impact of a consensus approach (Sampson et al. 2007).
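An equally weighted consensus position can be computed by averaging the member forecast positions. Averaging on 3D Cartesian unit vectors, an implementation choice of ours rather than a documented detail of the project, keeps the average well behaved near the date line:

```python
import numpy as np

def consensus_position(lats_deg, lons_deg):
    """Equally weighted consensus of TC positions (degrees) from several models.

    Positions are averaged as 3D Cartesian unit vectors and converted back,
    which avoids the wrap-around problem when members straddle 180 degrees
    longitude.
    """
    lat = np.radians(np.asarray(lats_deg, dtype=float))
    lon = np.radians(np.asarray(lons_deg, dtype=float))
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    xm, ym, zm = x.mean(), y.mean(), z.mean()
    return (np.degrees(np.arctan2(zm, np.hypot(xm, ym))),
            np.degrees(np.arctan2(ym, xm)))
```

For members spread over a few degrees, the result is essentially the centroid of the member positions; a naive arithmetic mean of longitudes would fail for members on opposite sides of the date line.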

Fig. 5.

TC position errors (km; left vertical axis) up to 5 days by the ECMWF, NCEP, and UKMO and the consensus of the three centers at each verifying TC basin (lines). The verification period is 3 years, from 2012 to 2014. The black dots show the number of samples (right vertical axis).

Intensity forecasts.

While the accuracy of TC track forecasts has steadily improved in general, as seen in Figs. 2 and 4, accurate TC intensity forecasting remains a great challenge for both the TC research and forecasting communities (e.g., Rappaport et al. 2009; Ito 2016). Recent studies, however, have reported some progress in TC intensity forecasts. For example, DeMaria et al. (2014) demonstrated that the best available intensity forecast guidance [e.g., the Statistical Hurricane Intensity Prediction Scheme (SHIPS)] has shown considerable advances over the last 24 years. Other examples include the development of regional NWP systems using high-resolution regional models, such as the Coupled Ocean–Atmosphere Mesoscale Prediction System for Tropical Cyclones (COAMPS-TC; Doyle et al. 2014) and the Hurricane Weather Research and Forecasting Model (HWRF; Gall et al. 2013), and the assimilation of aircraft observations, such as airborne Doppler radar data, in and around TCs (e.g., Zhang and Weng 2015).

As the resolutions of the global NWP models increase, their ability to forecast strong TCs is improving. We therefore started verifying TC intensity forecasts by the operational global NWP systems under the project, in addition to the track forecasts. Figure 6 shows scatterplots verifying 3-day forecasts of the minimum sea level pressure of TCs generated from 2012 to 2014 in the WNP and NAT basins. Note that the horizontal resolutions of the mean sea level pressure fields provided for the intercomparison differ from one NWP center to another. In general, all the NWP models tend to underestimate TC intensity when the observed minimum sea level pressure is approximately 940 hPa or below. Meanwhile, cases in which the forecast intensity is stronger than the best-track analysis are more frequent for ECMWF, JMA, and NRL than for the others. Notably, the underestimation of strong TCs is present even at the initial time of the forecasts, which implies that the resolution of the models used in data assimilation and forecasting needs to be further enhanced, along with the development of advanced data assimilation techniques and increases in the quantity and quality of observational data in and around TCs.

Fig. 6.

Scatterplots of 3-day TC intensity forecasts at the WNP (closed circle) and NAT (open circle) basins, respectively. The verification period is 3 years, from 2012 to 2014. The x and y axes show analyzed (i.e., best track) and forecast minimum sea level pressure (hPa), respectively.

FRA, JMA, KMA, and NCEP also provide the forecast fields of their regional models: Aire Limitée Adaptation Dynamique Développement International (ALADIN), the Meso-Scale Model (MSM), the Regional Data Assimilation and Prediction System (RDAPS), and HWRF, respectively. In general, the accuracy of the regional models' track forecasts is comparable to that of the corresponding global models, though the performance appears sensitive to the choice of lateral boundary conditions; for the intensity forecasts, however, the root-mean-square errors of the minimum sea level pressure forecasts tend to be smaller in the regional models (not shown).

DISCUSSION.

In this section, we will discuss three future possible directions on the verification of TC track forecasts based on the reports and recommendations at the IWTC-8 in 2014.

One of the recommendations at the IWTC-8, addressed to both operational centers and the research community, is to focus on model performance for the most difficult forecast cases and to explore the predictability of these events. Figures 7a–c show forecast cases in which all or most models had large errors simultaneously [because the data for the 2015 and 2016 intercomparisons have yet to be exchanged, the TC tracking data for Hurricane Joaquin and Tropical Cyclone Winston were created using The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE; Bougeault et al. 2010; Swinbank et al. 2016)]. Figure 7a shows Typhoon Halong, initiated at 1200 UTC 30 July 2014: all the global NWP models failed in the track forecast and, with the exception of UKMO, showed a similar poleward bias. Figure 7b shows Hurricane Joaquin, initiated at 1200 UTC 29 September 2015: all the models failed to predict the northeastward movement of the hurricane and instead indicated a possible landfall on the East Coast of the United States. Figure 7c shows South Pacific Tropical Cyclone Winston, initiated at 1200 UTC 16 February 2016: most of the models failed to predict the westward movement of the tropical cyclone and instead predicted recurvature toward the southeast. Although the annual-average forecast errors have been decreasing, as seen in Figs. 2 and 4, forecast bust cases like those in Figs. 7a–c still occur. Clearly, the causes of such large errors should be explored, and the NWP systems should be improved accordingly. Figure 8 shows a box plot of the position errors of 3-day forecasts for TCs over the globe from 2012 to 2014, together with the mean error of each NWP center. The error distributions reveal that the mean error is larger than the median for all NWP centers and that the tails (with large errors) extend very far from the mean and the median. Such a distribution implies that, while the mean error is decreasing, there remain many cases in which the errors are extremely large. In other words, there is still potential to further reduce the annual-average TC position errors by reducing the number of such large-error cases. More information can be found online (http://dx.doi.org/10.1175/BAMS-D-16-0133.2).
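The statistics behind Fig. 8 can be illustrated with a minimal sketch: the position error is the great-circle distance between the forecast and best-track positions, and a right-skewed error sample (a few bust cases) pulls the mean above the median exactly as described. The function names and the summary layout below are illustrative, not taken from the WGNE verification code.

```python
import math

def position_error_km(lat_f, lon_f, lat_o, lon_o):
    """Great-circle distance (km) between a forecast position and the
    verifying best-track position, via the haversine formula."""
    R = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat_f), math.radians(lat_o)
    dphi = math.radians(lat_o - lat_f)
    dlmb = math.radians(lon_o - lon_f)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def error_summary(errors):
    """Mean, median, and 25th/75th percentiles of a sample of position
    errors; with a long right tail, the mean exceeds the median."""
    s = sorted(errors)
    n = len(s)
    def pct(p):
        # linear interpolation between closest ranks
        k = (n - 1) * p
        f, c = math.floor(k), math.ceil(k)
        return s[f] + (s[c] - s[f]) * (k - f)
    return {"mean": sum(s) / n, "median": pct(0.5),
            "p25": pct(0.25), "p75": pct(0.75)}
```

For a hypothetical sample of 3-day errors such as `[100, 120, 130, 150, 900]` km, the single 900-km bust case lifts the mean (280 km) well above the median (130 km), mirroring the shape of the distributions in Fig. 8.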

Fig. 7.

Example of a 5-day TC track forecast (up to 3 days for FRA) for (a) Typhoon Halong initiated at 1200 UTC 30 Jul 2014, (b) Hurricane Joaquin initiated at 1200 UTC 29 Sep 2015, and (c) Tropical Cyclone Winston initiated at 1200 UTC 16 Feb 2016. The black line shows the best track, and the colored lines show the forecasts by the global models.

Citation: Bulletin of the American Meteorological Society 98, 11; 10.1175/BAMS-D-16-0133.1

Fig. 8.

Box-and-whisker plots of TC position errors of 3-day forecasts of each NWP center. The verification period is 3 years, from 2012 to 2014, and the verifying TCs include all TCs over the globe. The red point is the mean value; the box indicates the 25th and 75th percentiles of the error distribution. The top and bottom whiskers indicate the largest and smallest values, respectively.


Second, exploring and assessing impact-based, TC-related warnings, rather than simply verifying the errors of TC positions, will become more important. The improving accuracy of TC track forecasts is of course more than welcome, but the disasters associated with TCs result from damaging winds, heavy rainfall, storm surge, and so on. Because all of these impacts depend strongly on the track, we can take advantage of the improvement in track forecasts to develop and evaluate impact-based, TC-related warnings.

This might be beyond the scope of the WGNE verification study, but the enhanced use of ensemble forecasts is of great importance in TC track forecasting. As reported at the IWTC-8 (Elliott and Yamaguchi 2014), issuing TC track forecasts out to 5 days is now standard at RSMCs and TCWCs around the world (Table 2). When issuing these forecasts, all the RSMCs and TCWCs display track forecast uncertainty in the form of a cone or circle. However, the use of the spread of multiple dynamical model guidance or of ensembles is limited, and in many cases the size of the uncertainty cone or circle is determined from track forecast errors averaged over previous years. In the NIO basin, for example, a cone of uncertainty based on the track forecast errors averaged over the past 5 years would result in 60% of the observed tracks falling within the cone, but the suitability of such a cone varies with track type (Elliott and Yamaguchi 2014): 90% of straight-moving cyclonic disturbances would lie within the cone, but only 39% of recurving or looping tracks would. A situation-dependent track forecast confidence display would therefore clearly be more appropriate (e.g., Goerss 2007; Yamaguchi et al. 2009; DeMaria et al. 2013). Since the track forecast is the most important factor in preparing warnings that accurately communicate the threat to the community, assessing TC track forecast uncertainty, and proposing more effective methods of communicating that uncertainty to the public, would be of great importance and is one of the challenges to be explored in this WGNE intercomparison study.
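The contrast between a fixed climatological cone and a situation-dependent one can be sketched as follows. The first function measures how often verifying positions fall inside a cone of a given radius (the 60% vs. 90%/39% coverage figures above are exactly this quantity stratified by track type); the second scales a climatological radius by the relative ensemble spread, one simple way (among those proposed in, e.g., Goerss 2007) to make the display flow dependent. Both function names and the scaling rule are illustrative assumptions, not an operational algorithm.

```python
def cone_coverage(position_errors_km, radius_km):
    """Fraction of forecast cases whose position error falls inside a
    cone (circle) of the given radius at a fixed lead time."""
    inside = sum(1 for e in position_errors_km if e <= radius_km)
    return inside / len(position_errors_km)

def spread_scaled_radius(climatological_radius_km, ensemble_spread_km,
                         mean_spread_km):
    """Situation-dependent cone radius: the climatological radius scaled
    by the current ensemble track spread relative to its mean value, so
    low-spread (high-confidence) cases get a tighter cone and
    high-spread cases a wider one."""
    return climatological_radius_km * (ensemble_spread_km / mean_spread_km)
```

With hypothetical errors `[50, 80, 200]` km, a 100-km cone covers two-thirds of the cases; a case whose ensemble spread is 1.5 times the mean would get a cone 1.5 times the climatological radius.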

Table 2.

Dates on which various RSMCs or TCWCs extended their TC track forecasts from 3 to 5 days (third column) and their track forecast uncertainty display methodology [ensemble prediction system (EPS)].


SUMMARY.

TC track forecasts by operational NWP models have been evaluated in a consistent manner since 1991 under the JSC–CAS WGNE. This quarter-century-long effort is invaluable to evaluate the progress of the operational global NWP models as well as their performance over the tropics and subtropics. Moreover, this intercomparison project has helped foster the improvement in the models through identifying shortcomings in the TC track forecasts such as systematic biases and forecast bust cases.

This WGNE intercomparison clearly shows that TC track forecasts by operational global models have improved significantly over the last quarter century. The improvement is evident not only in the verification for each hemisphere and the globe as a whole but also in each individual basin. In the WNP basin, for example, an improvement in lead time of approximately 2.5 days was achieved over the 22 years from 1993 to 2014. In addition, given skillful track guidance from multiple NWP centers, the combination of these tracks (i.e., a consensus track) is generally more skillful over a season than any of the individual NWP center tracks. In contrast to track forecasts, challenges remain in forecasting TC intensity with global models, though the resolution of some models has increased to the point that they can represent very strong TCs.
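The simplest form of the consensus track mentioned above (in the spirit of Goerss 2000) is an unweighted average of the positions predicted by the available models at each lead time. The sketch below averages positions as 3D unit vectors rather than averaging raw longitudes, which avoids a spurious result when tracks straddle the date line; the function name is illustrative.

```python
import math

def consensus_position(model_positions):
    """Unweighted consensus fix from a list of (lat, lon) positions, one
    per model, at a single lead time. Positions are averaged as 3D unit
    vectors so that longitudes wrapping across 180E/180W are handled
    correctly; the mean vector is converted back to lat/lon."""
    x = y = z = 0.0
    for lat, lon in model_positions:
        phi, lmb = math.radians(lat), math.radians(lon)
        x += math.cos(phi) * math.cos(lmb)
        y += math.cos(phi) * math.sin(lmb)
        z += math.sin(phi)
    n = len(model_positions)
    x, y, z = x / n, y / n, z / n
    lat_c = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon_c = math.degrees(math.atan2(y, x))
    return lat_c, lon_c
```

For nearby model positions such as (10N, 130E) and (12N, 132E), the consensus is close to the arithmetic mean (11N, 131E); operational consensus schemes additionally weight or select members (e.g., Sampson et al. 2007), which this sketch omits.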

For further improvement in track forecasting, diagnosing and resolving the systematic biases seen in most forecast models and reducing the number of forecast bust cases will be important, as recommended at the WMO IWTC-8. From an operational forecasting perspective, the use of ensemble techniques to provide flow-dependent uncertainty information on TC tracks should be enhanced, as the need for such situation-dependent track forecast information continues to increase. In addition, we should focus more on exploring and assessing impact-based, TC-related warnings and make the most of the improved track forecasts for mitigating and preventing disasters caused by damaging winds, heavy rainfall, and storm surge.

ACKNOWLEDGMENTS.

The JMA thanks the WGNE and each participating NWP center for their constant support of the intercomparison. The JMA also thanks each participating NWP center for providing the forecast data, the RSMCs and TCWCs for providing the best-track data, and TIGGE for providing the global ensemble dataset. The authors thank Dr. Carolyn Reynolds at the Naval Research Laboratory; Dr. Keith Williams and Dr. Julian Heming at the Met Office in the United Kingdom; Dr. Beth Ebert and Dr. Noel Davidson at the Bureau of Meteorology in Australia; Dr. Grant Elliott at Woodside Energy Ltd.; and Dr. Thierry Dupont at Météo-France for providing valuable comments and helping us improve the paper. Dr. Yamaguchi is partly supported by Japan Society for the Promotion of Science KAKENHI Grant 26282111.

REFERENCES

  • Bougeault, P., and Coauthors, 2010: The THORPEX Interactive Grand Global Ensemble. Bull. Amer. Meteor. Soc., 91, 1059–1072, doi:10.1175/2010BAMS2853.1.

  • DeMaria, M., and Coauthors, 2013: Improvements to the operational tropical cyclone wind speed probability model. Wea. Forecasting, 28, 586–602, doi:10.1175/WAF-D-12-00116.1.

  • DeMaria, M., C. Sampson, J. Knaff, and K. Musgrave, 2014: Is tropical cyclone intensity guidance improving? Bull. Amer. Meteor. Soc., 95, 387–398, doi:10.1175/BAMS-D-12-00240.1.

  • Doyle, J. D., and Coauthors, 2014: Tropical cyclone prediction using COAMPS-TC. Oceanography, 27, 104–115, doi:10.5670/oceanog.2014.72.

  • Ebert, E., U. Damrath, W. Wergen, and M. Baldwin, 2003: The WGNE assessment of short-term quantitative precipitation forecasts. Bull. Amer. Meteor. Soc., 84, 481–492, doi:10.1175/BAMS-84-4-481.

  • Elliott, G., and M. Yamaguchi, 2014: Advances in forecasting motion. Eighth Int. Workshop on Tropical Cyclones, Jeju, South Korea, World Meteorological Organization, 1. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/Topic1_AdvancesinForecastingMotion.pdf.]

  • Elsberry, R. L., and L. E. Carr III, 2000: Consensus of dynamical tropical cyclone track forecasts—Errors versus spread. Mon. Wea. Rev., 128, 4131–4138, doi:10.1175/1520-0493(2000)129<4131:CODTCT>2.0.CO;2.

  • Gall, R., J. Franklin, F. Marks, E. N. Rappaport, and F. Toepfer, 2013: The Hurricane Forecast Improvement Project. Bull. Amer. Meteor. Soc., 94, 329–343, doi:10.1175/BAMS-D-12-00071.1.

  • Goerss, J. S., 2000: Tropical cyclone track forecasts using an ensemble of dynamical models. Mon. Wea. Rev., 128, 1187–1193, doi:10.1175/1520-0493(2000)128<1187:TCTFUA>2.0.CO;2.

  • Goerss, J. S., 2007: Prediction of consensus tropical cyclone track forecast error. Mon. Wea. Rev., 135, 1985–1993, doi:10.1175/MWR3390.1.

  • Hogan, T. F., and Coauthors, 2014: The Navy Global Environmental Model. Oceanography, 27, 116–125, doi:10.5670/oceanog.2014.73.

  • Ishida, J., 2016: WGNE intercomparison of tropical cyclone track forecast. 31st Session of the Working Group on Numerical Experimentation, Pretoria, South Africa, World Meteorological Organization, 1–16. [Available online at www.wmo.int/pages/prog/arep/wwrp/new/documents/2_TC_verification.pdf.]

  • Ito, K., 2016: Errors in tropical cyclone intensity forecast by RSMC Tokyo and statistical correction using environmental parameters. SOLA, 12, 247–252, doi:10.2151/sola.2016-049.

  • Muroi, C., and N. Sato, 1994: Intercomparison of tropical cyclone track forecast by ECMWF, UKMO and JMA operational global models. JMA Numerical Prediction Division Tech. Rep. 31, 26 pp.

  • Rappaport, E. N., and Coauthors, 2009: Advances and challenges at the National Hurricane Center. Wea. Forecasting, 24, 395–419, doi:10.1175/2008WAF2222128.1.

  • Sampson, C. R., J. A. Knaff, and E. M. Fukada, 2007: Operational evaluation of a selective consensus in the western North Pacific basin. Wea. Forecasting, 22, 671–675, doi:10.1175/WAF991.1.

  • Sandu, I., P. Bechtold, A. Beljaars, A. Bozzo, F. Pithan, T. G. Shepherd, and A. Zadra, 2016: Impacts of parameterized orographic drag on the Northern Hemisphere winter circulation. J. Adv. Model. Earth Syst., 8, 196–211, doi:10.1002/2015MS000564.

  • Swinbank, R., and Coauthors, 2016: The TIGGE Project and its achievements. Bull. Amer. Meteor. Soc., 97, 49–67, doi:10.1175/BAMS-D-13-00191.1.

  • Tomassini, L., P. R. Field, R. Honnert, S. Malardel, R. McTaggart-Cowan, K. Saitou, A. T. Noda, and A. Seifert, 2017: The “grey zone” cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations. J. Adv. Model. Earth Syst., 9, 39–64, doi:10.1002/2016MS000822.

  • Tsuyuki, T., R. Sakai, and H. Mino, 2002: The WGNE intercomparison of typhoon track forecasts from operational global models for 1991–2000. WMO Bull., 51, 253–257.

  • Walters, D., and Coauthors, 2017: The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 configurations. Geosci. Model Dev., 10, 1487–1520, doi:10.5194/gmd-10-1487-2017.

  • WCRP, 1993: Report of the eighth session of the CAS/JSC Working Group on Numerical Experimentation. WMO/TD-549, 41 pp.

  • Yamaguchi, M., R. Sakai, M. Kyoda, T. Komori, and T. Kadowaki, 2009: Typhoon Ensemble Prediction System developed at the Japan Meteorological Agency. Mon. Wea. Rev., 137, 2592–2604, doi:10.1175/2009MWR2697.1.

  • Zhang, F., and Y. Weng, 2015: Predicting hurricane intensity and associated hazards: A five-year real-time forecast experiment with assimilation of airborne Doppler radar observations. Bull. Amer. Meteor. Soc., 96, 25–33, doi:10.1175/BAMS-D-13-00231.1.

  • Zhou, X., Y. Zhu, D. Hou, Y. Luo, J. Peng, and R. Wobus, 2017: Performance of the new NCEP Global Ensemble Forecast System in a parallel experiment. Wea. Forecasting, doi:10.1175/WAF-D-17-0023.1, in press.
