Evaluation of the Forecast Performance for Week-2 Winter Surface Air Temperature from the Model for Prediction Across Scales–Atmosphere (MPAS-A)

Wenkai Li (a,b), Jinmei Song (a), Pang-chi Hsu (a), and Yong Wang (a,b)

(a) Collaborative Innovation Center on Forecast and Evaluation of Meteorological Disasters (CIC-FEMD)/Key Laboratory of Meteorological Disaster, Ministry of Education (KLME)/Joint International Research Laboratory of Climate and Environment Change (ILCEC), Nanjing University of Information Science and Technology, Nanjing, China
(b) Institute of Weather Prediction Science and Applications, HuaFeng-NUIST, Nanjing, China

Abstract

The forecast skill for week-2 wintertime surface air temperature (SAT) over the Northern Hemisphere by the Model for Prediction Across Scales–Atmosphere (MPAS-A) is evaluated and compared with operational forecast systems that participate in the Subseasonal to Seasonal Prediction project (S2S). An intercomparison of the MPAS against the China Meteorological Administration (CMA) model and the European Centre for Medium-Range Weather Forecasts (ECMWF) model was performed using 10-yr reforecasts. Comparing the forecast skill for SAT and atmospheric circulation anomalies at a lead of 2 weeks among the three models, the MPAS shows lower skill than the ECMWF model but higher skill than the CMA model. The gap in skill between the MPAS and CMA models is not as large as that between the ECMWF and MPAS models. Additionally, an intercomparison of the MPAS model against 10 S2S models is presented using real-time forecasts since 2016 stored in the S2S database. The results show that the MPAS model has forecast skill for week-2 to week-4 wintertime SAT comparable to that of most S2S models. The MPAS model tends to be at an intermediate level compared with current operational forecast models.

© 2022 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Wenkai Li, wenkai@nuist.edu.cn


1. Introduction

The subseasonal forecast, which resides between medium-range and seasonal forecasts, provides enormous benefits for decision-makers (White et al. 2017). Its products are frequently in the form of week-to-week meteorological variables with forecast lead times from 2 to 4 weeks (e.g., Vitart and Robertson 2018; de Andrade et al. 2019; Diro and Lin 2020; Li et al. 2020; Cao et al. 2021; Dutra et al. 2021; Klingaman et al. 2021; Wang et al. 2021). Subseasonal variations in wintertime near-surface air temperature (SAT) affect societal and economic activities; for example, cold surges can cause agricultural losses and harm public health. Skillful forecasts of subseasonal SAT variations with sufficient lead time are therefore of great significance.

Dynamical models are important tools for subseasonal forecasting, and their products provide valuable guidance. Improvements in numerical models contribute to increasing the quality of subseasonal forecasts. Evaluating a model's forecast skill is therefore one of the research priorities before applying it to operational subseasonal forecasting. Clarifying the model's issues (e.g., systematic errors, the role of resolution, and the role of ocean–atmosphere coupling) also helps improve its forecast ability.

The Model for Prediction Across Scales (MPAS) is a collaborative project for developing atmosphere, ocean, and other Earth-system simulation components for use in weather and climate studies. MPAS–Atmosphere (MPAS-A) is a nonhydrostatic atmosphere model built within the MPAS framework and developed by the National Center for Atmospheric Research (NCAR). Because this paper involves only the atmosphere model, MPAS-A is abbreviated as MPAS in the following. As a promising tool, the MPAS has been increasingly used to meet forecasting demands (e.g., Ha et al. 2017; Schwartz 2019; Hsu et al. 2021; Lui et al. 2021; Tian and Zou 2021), to conduct numerical simulations and sensitivity experiments on multiple time scales (e.g., Pilon et al. 2016; Campbell et al. 2020; Kramer et al. 2020; Imberger et al. 2021; Maoyi and Abiodun 2021), to study climate change (Michaelis et al. 2019), to model air quality (Gilliam et al. 2021), etc.

As a global numerical weather prediction model, the MPAS can be used to produce subseasonal forecasts. However, there have been few verifications of such an application so far. This work aims to extend the MPAS model to subseasonal forecasting by evaluating its performance. The motivation for using the MPAS model stems from several of its advantages, e.g., flexible resolution settings, multiple physics parameterization schemes, and open-source availability. This study assesses the forecast skill for week-2 wintertime SAT over the Northern Hemisphere by the MPAS and compares it with that of operational forecast systems that participate in the S2S project (Vitart et al. 2017). We mainly focus on the week-2 forecast, as this is the first step toward the goal of applying the MPAS model to subseasonal forecasts.

The rest of the paper is organized as follows. In section 2, we describe the models and reforecasts employed, the validation data and the verification methods used. Section 3 presents the results of the evaluation of the forecast performance of SAT for the week-2 outlook, as well as the skill for atmospheric circulations. Then, in section 4, we provide some discussion to extend the results. Finally, conclusions are given in section 5.

2. Materials and methods

a. MPAS model settings

The MPAS model (version 7.1) is used for the reforecast experiments in this study. The MPAS uses the C-grid staggering method for state variables, and the entire globe is covered with an unstructured centroidal Voronoi mesh (Skamarock et al. 2012). The horizontal resolution is set as a 120-km uniform mesh globally. The model top is set at 30 km with 55 model levels.

MPAS provides suites of parameterization schemes that have been tested together. We use the default physics suite in MPAS, namely, the “mesoscale reference” suite. This suite is appropriate for horizontal resolutions greater than 10 km. The mesoscale reference suite includes the New Tiedtke convective cumulus scheme (Zhang and Wang 2017), the Weather Research and Forecasting Model single-moment 6-class microphysics (WSM6) scheme (Hong and Lim 2006), the Noah land surface model (Ek et al. 2003), Yonsei University (YSU) planetary boundary layer parameterization (Hong et al. 2006), Monin–Obukhov surface layer parameterization (Monin and Obukhov 1954), and the Rapid Radiative Transfer Model for GCM applications (RRTMG) longwave and shortwave radiation parameterizations (Iacono et al. 2008).

Reforecasts (sometimes known as hindcasts) are initialized with the National Centers for Environmental Prediction (NCEP) FNL (Final) Operational Model Global Tropospheric Analyses. The current MPAS model is an atmosphere- and land-only model. The sea surface temperature is taken from the initial conditions and kept constant during the integration.

A time-lagged ensemble strategy is employed, which helps improve the forecast skill. Members are initialized from FNL analyses at 6-h intervals, and there are four members in one forecast. The first member is integrated for 14 days. For each additional member, the initialization time is 6 h earlier than that of the previous member, and the integration period increases by 6 h relative to the previous member. For example, the second member is initialized from FNL analyses 6 h earlier than the first member and is integrated for 14 days and 6 h. Ensemble mean results are derived by averaging the four members for the same calendar target dates.
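As a minimal sketch of how the lagged members can be combined, the following Python snippet averages four members on common calendar target days and then forms the week-1 and week-2 means defined in section 2b. It is illustrative only: the synthetic arrays, variable names, and grid size are placeholders, not the actual workflow used in this study.

```python
import numpy as np

# Four time-lagged members; each is assumed to have already been aligned to the
# same 14 calendar target days as the first (latest-initialized) member.
n_members, n_days, n_lat, n_lon = 4, 14, 121, 240
rng = np.random.default_rng(0)

members = [
    rng.normal(loc=270.0, scale=5.0, size=(n_days, n_lat, n_lon))  # placeholder daily SAT
    for _ in range(n_members)
]

# Lagged-ensemble mean on the common target dates
ens_mean = np.mean(members, axis=0)          # shape (14, n_lat, n_lon)

# Weekly averages: week 1 = days 1-7, week 2 = days 8-14
week1_sat = ens_mean[0:7].mean(axis=0)
week2_sat = ens_mean[7:14].mean(axis=0)
print(week2_sat.shape)                       # (121, 240)
```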

b. Reforecasts

The forecast skill of the MPAS is compared with operational forecast systems that participate in the S2S project. The reforecast data are obtained from the S2S archive database; details of the database can be found in Vitart et al. (2017). The comparison here includes the China Meteorological Administration (CMA) model and the European Centre for Medium-Range Weather Forecasts (ECMWF) model. The CMA system has a horizontal resolution of T266 (approximately 45 km) and 56 vertical layers up to 0.1 hPa for the atmospheric model. The atmospheric model in the ECMWF system has a horizontal resolution of T639 (approximately 16 km) up to day 15 and T319 (approximately 32 km) after day 15, with 91 vertical layers up to 0.01 hPa. Both the CMA and ECMWF systems were run with air–sea coupled models. The ocean models in the CMA and ECMWF systems have a 0.25° horizontal resolution with 50 vertical levels and a 0.25° horizontal resolution with 75 vertical levels, respectively. More details on the model descriptions can be found at https://confluence.ecmwf.int/display/S2S/Models (last access: June 2022). These two models share a common reforecast period from July 2006 to July 2016, with a reforecast initialization frequency of at least once per week. The control forecast (using a single unperturbed initial condition) and the first three perturbed forecasts are used for each model to construct ensemble means.

The reforecasts made by the MPAS in this study have the same initialization times as the selected reforecasts from the S2S database. The considered reforecasts span 10 hindcast years. Reforecasts in each hindcast year have initialization times from 0000 UTC 23 July of one calendar year to 0000 UTC 15 July of the following calendar year, with an interval of 7 days between initialization times. The forecast values are averaged over 7 consecutive days (one week), with week 1 as days 1–7 and week 2 as days 8–14. This study focuses on the week-2 outlook. Each hindcast year has 52 forecasts, for a total of 520 forecasts during the 10 hindcast years from July 2006 to July 2016. Boreal wintertime forecasts are selected according to initialization time (from November to March). Each winter has 20 forecasts (weeks), for a total of 200 forecasts during the 10 hindcast years.

The climatology of each week is the 10-yr average of reforecasts for the same week from July 2006 to July 2016; it is therefore the average of 40 weekly values (10 years × 4 members). Additionally, a 13-week low-pass Lanczos filter is applied to the weekly annual cycle of the climatology to smooth the weekly climatological averages. The annual cycle of the climatology is calculated at each grid point, for each forecast lead week, and for each model. The forecast weekly anomaly is obtained by removing the annual cycle of the climatology.
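To make the climatology smoothing concrete, the sketch below builds low-pass Lanczos weights and applies them to a weekly climatological annual cycle at a single grid point before computing an anomaly. It is only a rough illustration: the interpretation of "13-week low-pass" as a cutoff period of 13 weeks, the 27-point window length, and the wrap-around treatment of the annual cycle are assumptions, since those details are not specified above.

```python
import numpy as np

def lanczos_lowpass_weights(window, cutoff):
    """Low-pass Lanczos filter weights.

    window : odd number of weights
    cutoff : cutoff frequency (here in cycles per week)
    """
    order = (window - 1) // 2
    k = np.arange(-order, order + 1)
    w = np.zeros(window)
    w[k != 0] = np.sin(2.0 * np.pi * cutoff * k[k != 0]) / (np.pi * k[k != 0])
    w[k == 0] = 2.0 * cutoff
    w *= np.sinc(k / (order + 1.0))   # sigma factor damping Gibbs oscillations
    return w / w.sum()

# 52 weekly climatological values at one grid point (synthetic annual cycle)
clim = 270.0 + 15.0 * np.cos(2.0 * np.pi * np.arange(52) / 52.0)

# assumed: cutoff period of 13 weeks; the window length is an arbitrary choice
weights = lanczos_lowpass_weights(window=27, cutoff=1.0 / 13.0)

# the annual cycle is periodic, so filter with circular (wrap-around) boundaries
padded = np.concatenate([clim, clim, clim])
smoothed = np.convolve(padded, weights, mode="same")[52:104]

# forecast weekly anomaly = forecast weekly value minus smoothed climatology
weekly_forecast_value = 268.0
anomaly = weekly_forecast_value - smoothed[10]
print(round(float(anomaly), 2))
```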

In section 3, the comparison of forecast skill is based on 10-yr reforecasts as described above. The reforecasts used for discussion in section 4 are different. Details are described in section 4.

c. Verification data and methods

The reforecasts are verified against ERA5 reanalysis data (Hersbach et al. 2020). ERA5 is a state-of-the-art global atmospheric reanalysis. ERA5 data are also processed into weekly means to compare with the reforecasts. The selected ERA5 data cover the same 10-yr period as the reforecasts. The weekly anomalies in the reanalysis are obtained in the same way as for the reforecasts.

The S2S model outputs are stored in the S2S prediction project database on a common 1.5° × 1.5° regular latitude–longitude grid, regardless of the original resolution of the model outputs. All variables from the S2S database and the ERA5 reanalysis are used at this horizontal resolution. The reforecasts produced by the MPAS model at the original 120-km horizontal resolution are interpolated onto the same 1.5° × 1.5° S2S database grid.
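One way to interpolate fields from the unstructured MPAS mesh onto the 1.5° × 1.5° latitude–longitude grid is sketched below using SciPy's scattered-data interpolation. This is an illustrative assumption: the regridding tool, interpolation method, cell count, and variable names actually used for this study are not stated above and may differ.

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative regridding of one MPAS field (values at unstructured cell centers)
# onto the 1.5-degree latitude-longitude grid used by the S2S database.
ncells = 40962                                   # placeholder for a quasi-uniform 120-km mesh
rng = np.random.default_rng(1)
lon_cell = rng.uniform(0.0, 360.0, ncells)
lat_cell = rng.uniform(-90.0, 90.0, ncells)
t2m_cell = 288.0 - 0.5 * np.abs(lat_cell) + rng.normal(0.0, 1.0, ncells)

# target 1.5-degree grid
lat_out = np.arange(-90.0, 90.1, 1.5)
lon_out = np.arange(0.0, 360.0, 1.5)
lon2d, lat2d = np.meshgrid(lon_out, lat_out)

# scattered-data interpolation; a production workflow would also handle the
# periodic longitude boundary and could use the MPAS mesh connectivity directly
t2m_grid = griddata(
    points=np.column_stack([lon_cell, lat_cell]),
    values=t2m_cell,
    xi=(lon2d, lat2d),
    method="linear",
)
print(t2m_grid.shape)   # (121, 240)
```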

To quantify forecast ability, three common statistical measures, i.e., the mean bias, the temporal correlation coefficient (TCC), and the root-mean-square error (RMSE), are calculated in this study. These three metrics are widely used for subseasonal forecast assessments (e.g., Hsu et al. 2015; Vitart 2017; Qian et al. 2020; Yan et al. 2021). The bias assesses the systematic differences between the mean forecasts and mean observations. The TCC measures the correspondence between forecasts and observations, while the RMSE measures the average magnitude of the forecast errors. The forecast skill for SAT and atmospheric circulation is assessed by computing these statistical measures between the model forecasts and the verification data. The RMSE and TCC are calculated using anomalies. To describe the relative magnitude with respect to the variability of each variable, the anomaly of each variable is normalized by its own standard deviation during the analysis period. The RMSE is therefore nondimensional in this paper, expressed in units of one standard deviation. The statistical measures are calculated at every grid point to show their spatial patterns.
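A minimal sketch of the two anomaly-based metrics at a single grid point is given below, following the normalization described above; the variable names and the synthetic 200-sample series are placeholders for the 200 wintertime week-2 forecasts.

```python
import numpy as np

def tcc_and_rmse(forecast_anom, obs_anom):
    """Temporal correlation coefficient and normalized RMSE at one grid point.

    Both inputs are 1-D arrays of weekly anomalies over the verification
    samples.  Following the text, anomalies are normalized by their own
    standard deviations, so the RMSE is nondimensional (units of one
    standard deviation).
    """
    f = np.asarray(forecast_anom, dtype=float)
    o = np.asarray(obs_anom, dtype=float)

    # temporal correlation coefficient
    tcc = np.corrcoef(f, o)[0, 1]

    # normalize each series by its own standard deviation over the period
    f_norm = f / f.std()
    o_norm = o / o.std()
    rmse = np.sqrt(np.mean((f_norm - o_norm) ** 2))
    return tcc, rmse

# toy example with 200 samples
rng = np.random.default_rng(2)
obs = rng.normal(size=200)
fcst = 0.6 * obs + 0.8 * rng.normal(size=200)
print(tcc_and_rmse(fcst, obs))
```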

We study the area north of 20°N in the Northern Hemisphere (NH). Subregions, including East Asia (EA; 20°–55°N, 90°–150°E), Europe (EU; 35°–70°N, 10°W–50°E), and North America (NA; 30°–70°N, 125°–70°W), are also a focus. The statistical measures are regionally averaged over these subregions, which are indicated by the magenta outlines in Fig. 1. The regional averages are obtained by first calculating the statistical measures at each grid point and then computing the area-weighted average over all grid points in the subregion, with weights given by the cosine of latitude. A short sketch of this averaging is given below.
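The regional averaging (per-grid metric first, then a cosine-of-latitude weighted mean over the box) can be sketched as follows; the East Asia box is taken from the text, while the regular 1.5° grid layout and the placeholder skill map are assumptions for illustration.

```python
import numpy as np

# assumed regular 1.5-degree grid (as in the S2S database)
lat = np.arange(-90.0, 90.1, 1.5)
lon = np.arange(0.0, 360.0, 1.5)

# per-grid-point skill scores (e.g., TCC), placeholder values
rng = np.random.default_rng(3)
tcc_map = rng.uniform(0.2, 0.8, size=(lat.size, lon.size))

def box_average(field, lat, lon, lat_bounds, lon_bounds):
    """Cosine-of-latitude weighted average of a 2-D field over a lat-lon box."""
    lat_mask = (lat >= lat_bounds[0]) & (lat <= lat_bounds[1])
    lon_mask = (lon >= lon_bounds[0]) & (lon <= lon_bounds[1])
    sub = field[np.ix_(lat_mask, lon_mask)]
    weights = np.cos(np.deg2rad(lat[lat_mask]))[:, None] * np.ones(lon_mask.sum())
    return np.sum(sub * weights) / np.sum(weights)

# East Asia box from the text: 20-55N, 90-150E
ea_mean_tcc = box_average(tcc_map, lat, lon, (20.0, 55.0), (90.0, 150.0))
print(round(float(ea_mean_tcc), 3))
```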

Fig. 1. The systematic biases of wintertime surface air temperature (SAT; K) in week-2 forecasts by the (a) MPAS, (b) CMA, and (c) ECMWF models against those in ERA5 reanalysis data. The magenta boxes indicate the subregions defined for regional averaging in Fig. 2.

3. Results

a. Climatological SAT bias

Before we evaluate the forecast skill for SAT anomalies, the systematic bias of SAT in the models is presented. The spatial pattern of the climatological SAT bias in the models during winter at a lead of 2 weeks and its regional averages are shown in Figs. 1 and 2. The biases are derived as differences in the multiyear wintertime mean SAT between the forecasts and the ERA5 reanalysis. The MPAS model shows an obvious cold bias over land areas north of 45°N, especially over high-latitude land areas and the Tibetan Plateau (Fig. 1a). The cold bias in the MPAS model lies mostly within the boreal winter snow-covered area, which implies that snow–atmosphere coupling may contribute to this cold bias and deserves further study. The CMA model shows a pronounced cold bias over high-latitude Europe and a warm bias over parts of northeast Asia and the Tibetan Plateau (Fig. 1b). For the ECMWF model, cold biases mainly exist over Europe, northern Africa, and western North America (Fig. 1c); the ECMWF model also has a warm bias over northeast Asia. The CMA model additionally shows a large warm bias over the ocean, especially near the east coasts of East Asia and North America, whereas the biases over the ocean in the MPAS and ECMWF models are mostly small.

Fig. 2. Regional averaging of the systematic biases of wintertime SAT (K) in week-2 forecasts by the MPAS, CMA, and ECMWF models against those in the ERA5 reanalysis data over the Northern Hemisphere (NH; north of 20°N), East Asia (EA), Europe (EU), and North America (NA). The areas defined for averaging are indicated by the magenta outlines in Fig. 1.

For the entire Northern Hemisphere, the MPAS and ECMWF models show cold biases, while the CMA model shows warm biases (Fig. 2). Over East Asia, the MPAS model shows a cold bias and the ECMWF model a warm bias; these opposite biases are mainly caused by the models' differing biases over the northern part of East Asia, as suggested by the spatial pattern of biases. The CMA model shows large warm biases over East Asia, contributed mainly by large warm biases over sea areas. All three models have cold biases over Europe; the cold bias is most obvious in the CMA model, and the MPAS shows the smallest bias there. For North America, the MPAS shows a large cold bias, contributed mainly by large cold biases over high-latitude North America, as suggested by the spatial pattern of biases, while the CMA and ECMWF models show relatively small biases.

b. Forecast skill for week-2 SAT

Note that the climatological bias cannot be taken as the key indicator of forecast skill, because for subseasonal forecasts the anomalies are of greater concern than the raw model values. The climatology is removed to obtain anomalies. The annual cycle of the climatology is calculated at each grid point, for each forecast lead week, and for each model, so the systematic bias is removed along with the climatology. The RMSE and TCC are used to evaluate the forecast skill for week-2 SAT anomalies. The spatial patterns of the TCC and RMSE between the reforecasts and the ERA5 reanalysis are shown in Fig. 3. Here, the TCC and RMSE are based on 200 samples of wintertime forecasts. A higher TCC and a lower RMSE indicate better forecast skill. A skillful forecast is generally defined as a TCC greater than 0.5, and areas with a TCC less than 0.5 are masked in white. The regional averages of the spatial distributions of the TCC and RMSE are shown as scatterplots in Fig. 4.

Fig. 3. Forecast skill of the SAT in the models during wintertime. The spatial pattern of temporal correlation coefficients (TCCs) between the week-2 forecasted SAT anomalies in the (a) MPAS, (b) CMA, and (c) ECMWF models against those in the ERA5 reanalysis data. (d)–(f) As in (a)–(c), but for the root-mean-square errors (RMSEs; standard deviations). The magenta boxes indicate the subregions defined for regional averaging in Fig. 4.

Fig. 4. Regional averaging of the spatial TCCs and RMSEs shown in Fig. 3 over (a) the NH (north of 20°N), (b) EA, (c) EU, and (d) NA. The areas defined for averaging are indicated by the magenta outlines in Fig. 3. The x axis and y axis represent TCC and RMSE, respectively; note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill.

The spatial distributions of the forecast skill for SAT are similar among the models, although the skill magnitudes vary (Fig. 3). This is related to the spatial distribution of SAT variability in different regions. The MPAS model, like the other two models, shows skillful forecasts (TCC greater than 0.5) over most land areas. For each model, the forecast skill is generally higher over high-latitude Eurasia, the eastern part of the Asian continent, and the eastern part of North America than over other land areas. The models show noticeably lower skill over the western part of North America and the western part of East Asia, which may be due to topography that decreases the predictability of SAT. The models also poorly forecast SAT over the polar regions. Although there are some differences in the models' forecast skill for different regions, their skill rankings are the same (Fig. 4). Comparing the three models, the MPAS shows skill lower than the ECMWF model but higher than the CMA model over the Northern Hemisphere and all three subregions, and its skill is closer to that of the CMA model than to that of the ECMWF model.

c. Forecast skill for atmospheric circulation

Since SAT is modulated by atmospheric circulations, we further evaluated the forecast skill for atmospheric circulation. Sea level pressure (SLP), geopotential height at 500 hPa (H500), and zonal wind at 200 hPa (U200) are included as representative variables of the lower-, middle-, and upper-level troposphere, respectively. The spatial patterns of the TCC and RMSE between those variables in the reforecast and ERA5 reanalysis are shown in Figs. 5 and 6. The regional averages of forecast skills are shown in Fig. 7.

Fig. 5. The spatial pattern of TCCs between (a)–(c) the forecasted sea level pressure (SLP), (d)–(f) geopotential height at 500 hPa (H500), and (g)–(i) zonal wind at 200 hPa (U200) in the (left) MPAS, (center) CMA, and (right) ECMWF models with a lead of 2 weeks against those in the ERA5 reanalysis data. The magenta boxes indicate the subregions defined for regional averaging in Fig. 7.

Fig. 6. As in Fig. 5, but for the RMSEs (standard deviations).

Fig. 7. Regional averaging of the spatial TCCs and RMSEs for (a) SLP, (b) H500, and (c) U200 over the Northern Hemisphere (north of 20°N). The x axis and y axis represent TCC and RMSE, respectively; note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill. The areas defined for averaging are indicated by the magenta outlines in Figs. 5 and 6.

In general, the polar regions have the lowest forecast skill for all variables and all models, followed by the midlatitude areas (Figs. 5 and 6). Western North America and high-latitude Eurasia also show relatively low forecast skill. The spatial distribution of skill is similar among the models, which is likely related to the predictability of each variable varying by region, so differences in forecast skill between models lie mainly in intensity rather than spatial distribution. These differences in the magnitude of skill reflect each model's performance. The forecast skill of the MPAS is better than that of the CMA model but lower than that of the ECMWF model. The ECMWF model shows larger TCC and lower RMSE values than the other models, indicating that its forecasts of atmospheric circulation are of the highest quality. The TCCs of the ECMWF model for these three variables are almost all higher than 0.5 over the Northern Hemisphere, whereas the TCCs of the MPAS and CMA models fall below 0.5 in some areas, with the MPAS model having more areas with skillful forecasts than the CMA model. Comparing the three models (Fig. 7), their relative forecast skill for atmospheric circulation mirrors that for SAT: the ECMWF shows outstanding skill, and the skill of the MPAS model is lower than that of the ECMWF model but higher than that of the CMA model. The gap in skill between the MPAS and CMA models is not as large as that between the ECMWF and MPAS models. These features hold for all three variables.

4. Discussion

a. Intercomparison with more S2S models

In section 3, we compared the MPAS model with the CMA and ECMWF models that participate in the S2S project based on 10-yr reforecasts. The S2S database also stores real-time forecasts, which allow the MPAS model to be compared with more forecast systems. Because the accumulated real-time forecasts in the S2S database start from 2015 (some models start from 2016), such a comparison must be based on forecasts covering shorter periods. In this section, we present an intercomparison of the MPAS model against 10 S2S models. In addition to the CMA and ECMWF models, the intercomparison includes models operated by Environment and Climate Change Canada (ECCC), Météo-France/Centre National de Recherches Météorologiques (CNRM), the Hydrometeorological Centre of Russia (HMCR), the Institute of Atmospheric Sciences and Climate of the National Research Council (CNR-ISAC), the Japan Meteorological Agency (JMA), the Korea Meteorological Administration (KMA), the National Centers for Environmental Prediction (NCEP), and the Met Office (UKMO). The considered forecasts span 5 hindcast years, with initialization times on every Thursday covering November 2016–October 2021. Forecasts with the same initialization times are produced by the MPAS model. The data processing method is similar to that of the 10-yr comparison described in section 2, but only the control forecasts (using a single unperturbed initial condition) of each model are used.

The comparison of SAT forecast skill is shown in Fig. 8. The statistical significance of skill score differences between the MPAS model and the S2S models is determined using the bootstrap method (Efron 1979). Resampling with replacement is applied 10 000 times to estimate the probability distributions of the TCC and RMSE, with the two statistical measures calculated for each bootstrap sample. The results are then sorted, and the 5% significance intervals are determined from the 5th and 95th percentiles of the values obtained from the 10 000 repetitions. These intervals are used as a relative measure of the differences between the MPAS and the S2S models (dashed lines in Fig. 8). A higher TCC and a lower RMSE indicate better forecast skill.
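The bootstrap procedure can be illustrated with the short sketch below, which resamples the verification weeks with replacement 10 000 times and reports the 5th and 95th percentiles of the TCC; the exact resampling unit and the toy series are assumptions for illustration, since those implementation details are not spelled out above.

```python
import numpy as np

def bootstrap_tcc_interval(fcst, obs, n_boot=10_000, seed=0):
    """5th-95th percentile bootstrap interval of the TCC.

    fcst, obs : 1-D arrays of weekly anomalies over the verification weeks.
    Weeks are drawn with replacement, and the TCC is recomputed for each
    bootstrap sample.
    """
    rng = np.random.default_rng(seed)
    n = fcst.size
    scores = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample weeks with replacement
        scores[b] = np.corrcoef(fcst[idx], obs[idx])[0, 1]
    return np.percentile(scores, [5.0, 95.0])

# toy example: ~5 years of weekly forecasts
rng = np.random.default_rng(4)
obs = rng.normal(size=260)
fcst = 0.5 * obs + 0.9 * rng.normal(size=260)
low, high = bootstrap_tcc_interval(fcst, obs)
print(f"TCC 5th-95th percentile interval: [{low:.2f}, {high:.2f}]")
```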

Fig. 8. Regional averaging of the spatial TCCs and RMSEs over (a) the NH (north of 20°N), (b) EA, (c) EU, and (d) NA for SAT. TCCs and RMSEs are calculated using the forecasted SAT in the MPAS, 10 S2S models, and MPAS-60-km with a lead of 2 weeks against those in the ERA5 reanalysis data. The areas defined for averaging are indicated by the magenta outlines in Fig. 3. The x axis and y axis represent TCC and RMSE, respectively; note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill. The ranges of the axes in each plot are different. Dashed lines represent the 5% level of significance intervals calculated using a bootstrap resampling method for MPAS.

Comparing the MPAS and 10 S2S models, the ECMWF, ECCC, JMA, KMA, and UKMO models show higher skill than the MPAS model for SAT over the Northern Hemisphere (Fig. 8a). These models significantly outperform the MPAS model at the 5% level of significance. The MPAS model performs better than the CMA and HMCR models, and the differences in performance are also significant. Meanwhile, the MPAS model has a similar skill to the CNRM, CNR-ISAC, and NCEP models, with no significant differences. The results are similar for subregions (Figs. 8b–d). The ECMWF, ECCC, KMA, and UKMO models show better skill than the MPAS model for all subregions, while the CMA and HMCR models show lower skill than the MPAS model for all subregions, and these differences are significant. Furthermore, the NCEP model shows better skill than the MPAS model for East Asia, while the CNR-ISAC model shows lower forecast skill than MPAS in forecasts for the European region, and these differences are significant. Other than the above, the skills in the CNRM, JMA, CNR-ISAC, and NCEP models are not significantly different from that of the MPAS model. Overall, the MPAS is competitive compared to the S2S models investigated for week-2 SAT forecasting.

Many factors could influence a model's skill, including but not limited to the initial fields, the atmospheric dynamical core, and the parameterizations of physical processes. Similar skill in different models may therefore reflect common characteristics. The initialization for the MPAS in this work is the NCEP FNL analysis, which is almost identical to that for the NCEP forecasts. Moreover, both the MPAS model and the NCEP model use the Noah land surface model as their land surface scheme. This may explain why they have similar skill.

b. Sensitivity to model resolution

One of the advantages of the MPAS model is that it allows flexible horizontal resolution settings. Users can choose an appropriate resolution according to their needs. In general, models with finer horizontal resolution have better forecasting/simulating skills (e.g., Gao et al. 2006; Hack et al. 2006; Anstey et al. 2013; Strachan et al. 2013; Weber and Mass 2019), but also require more computational resources. Here, we provide a preliminary test on the sensitivity of the forecast skill of the MPAS model to horizontal resolution settings. A 60-km horizontal resolution reforecast was performed using the MPAS model (hereafter MPAS-60-km). The MPAS-60-km has the same model settings as those in the abovementioned 120-km MPAS reforecasts (hereafter MPAS-120-km) described in section 2a but uses a horizontal 60-km uniform mesh globally.

The TCC and RMSE values of MPAS-60-km for week-2 SAT are shown in the scatterplots in Fig. 8, together with the MPAS-120-km runs and the 10 S2S models. Reforecasts by MPAS-60-km are more skillful than the MPAS-120-km reforecasts, both for the Northern Hemisphere and for all subregions. For the Northern Hemisphere and East Asia, the improvements in performance are statistically significant. For Europe and North America, the differences in forecast skill between MPAS-60-km and MPAS-120-km are not statistically significant, but MPAS-60-km still shows some improvement relative to MPAS-120-km. At least for the sensitivity test provided here, a finer horizontal resolution in the MPAS model is beneficial for better forecasts.

c. Sensitivity to parameter schemes

The MPAS offers multiple parameterizations of physical processes, and the choice and combination of different parameterization schemes might affect forecast skill (e.g., Zhao et al. 2019; Weber et al. 2020; Zhu et al. 2020). We therefore examine the sensitivity of the forecast skill for week-2 SAT to the parameterization schemes. The parameterization schemes in the MPAS are grouped into the following modules: convection, microphysics, boundary layer, land surface, surface layer, and shortwave and longwave radiation. For the current version of the MPAS model, each of these physics parameterizations has two or more options, except for the land surface parameterization, which has only one. Because of computational constraints, we test only a limited set of combinations of the available parameterization schemes rather than all possible combinations. The configurations of the physical parameterization schemes for the sensitivity tests are listed in Table 1. The sensitivity investigations are based on the default physics suite in the MPAS (i.e., the mesoscale reference suite), in which we replace the physical parameterization schemes with other available options one by one (Exp-1 to Exp-7 in Table 1). The convection-permitting physics suite is also tested (Exp-8 in Table 1).

Table 1. Configurations of the physical parameterization schemes for the sensitivity tests and source citations. A blank cell indicates that the same parameterization option as in Exp-default is used.

The comparison of SAT forecast skill in the sensitivity tests is shown in Fig. 9. Overall, Exp-default has the best forecast skill; the only exception is Exp-3, which shows higher skill than Exp-default over North America, but this difference is not significant. Some configurations might not be applicable for subseasonal forecasting. Exp-1, Exp-2, Exp-4, and Exp-8 show significantly lower forecast skill than Exp-default in terms of overall skill for the Northern Hemisphere. For the subregions, Exp-2, Exp-4, Exp-5, and Exp-8 show significantly lower forecast skill than Exp-default for East Asia, and Exp-4 shows significantly lower forecast skill than Exp-default for Europe and North America. Otherwise, the skill in the sensitivity tests is not significantly different from that of Exp-default. These results suggest that the forecast skill is sensitive to the physical parameterization schemes in some cases, and that the default mesoscale reference suite is optimal for the MPAS at the 120-km uniform global mesh resolution used here. Note that the choice of parameterization scheme is closely related to the model resolution, so the conclusions here might be limited to the resolution set in this study.

Fig. 9. As in Fig. 8, but calculated using the forecasted SAT in the MPAS by default parameterization schemes and 10 other sensitivity test experiments, as listed in Table 1. Dashed lines represent the 5% level of significance intervals calculated using a bootstrap resampling method for Exp-default.

d. Forecast skill for week-3 and week-4 SAT and sensitivity to SST

To pay closer attention to S2S prediction skill, we also examined the forecast skill for week-3 and week-4 SAT by the MPAS model (Fig. 10). All models show limited forecast skill for week-3 and week-4 SAT. In particular, for week-4 SAT, the TCCs of all models in all regions are less than 0.2, and those of some models are close to zero. Comparing the MPAS and the 10 S2S models, the MPAS model is generally slightly below average. However, the differences in skill between the MPAS and most S2S models are not statistically significant, indicating that the forecast skill of the MPAS model for week-3 and week-4 SAT is also comparable to that of most S2S models.

Fig. 10. As in Fig. 8, but calculated using the forecasted SAT in the MPAS and 10 S2S models with leads of (a)–(d) 3 weeks and (e)–(h) 4 weeks against those in the ERA5 reanalysis data.

The current MPAS model is not an air–sea coupled model and therefore does not contain prognostic sea surface temperature (SST) as a lower boundary condition. Here, we test the sensitivity of the subseasonal forecast skill of the MPAS model to SST with an additional set of runs (MPAS-SST). The MPAS-SST runs have the same model settings as the MPAS reforecasts described in section 2a, but the model is integrated with daily updated SST from the FNL analyses. MPAS-SST shows larger TCC and lower RMSE values than the MPAS with constant SST (Fig. 11), and the differences in TCC and RMSE between the two are significant from week 2 to week 4. This suggests that the real-time forecasting skill of the stand-alone atmosphere- and land-only MPAS model is somewhat limited by the lack of a prognostic SST field, and it would be valuable to further develop an air–sea coupled version of the MPAS model. For theoretical studies using retrospective forecasts/simulations, it is better to update the SST with observed analyses during the integration of the MPAS model.

Fig. 11. Regional averaging of the (a) spatial TCCs and (b) RMSEs (standard deviations) over the Northern Hemisphere (north of 20°N), which are calculated using the forecasted SAT in the MPAS (integration with constant SST) and MPAS-SST (integration with daily updated SST) against those in the ERA5 reanalysis data. The x axis represents the forecast lead (weeks). The y axis represents TCCs or RMSEs. Error bars represent the 5% level of significance intervals calculated using a bootstrap resampling method.

5. Conclusions

The MPAS is an advanced global numerical weather prediction model under active development, built on an unstructured, mostly hexagonal mesh. This work aims to extend the MPAS model to subseasonal forecasting by evaluating its performance. The forecast skill for week-2 wintertime SAT over the Northern Hemisphere by the MPAS model is evaluated and compared with operational forecast systems that participate in the S2S project. An intercomparison of the MPAS against the CMA and ECMWF models was performed using 10-yr reforecasts. Comparing the forecast skill for SAT and atmospheric circulation anomalies at a lead of 2 weeks among the three models, the MPAS shows lower skill than the ECMWF model but higher skill than the CMA model, and the gap in skill between the MPAS and CMA models is not as large as that between the ECMWF and MPAS models. Additionally, an intercomparison of the MPAS model against 10 S2S models is presented using real-time forecasts since 2016 stored in the S2S database. The results show that the MPAS model has forecast skill for week-2 wintertime SAT comparable to that of most S2S models. The MPAS model tends to be at an intermediate level compared with current operational forecast models.

The sensitivities of the forecast skill for week-2 SAT to the horizontal resolution and to the configurations of the physical parameterization schemes of the MPAS model were also examined. The results show that choosing a finer horizontal resolution and suitable parameterization schemes is beneficial for producing skillful forecasts. A preliminary verification of the performance of the MPAS model for week-3 and week-4 wintertime SAT forecasts is also provided; the forecast skill of the MPAS model at these leads is likewise comparable to that of most S2S models. In addition, the MPAS model is somewhat limited for subseasonal forecasting due to the lack of a prognostic SST field.

The MPAS model provides flexible resolution settings and multiple options for physics parameterization schemes, which allow users to choose appropriate settings as needed. Apart from uniform resolution, the MPAS can act as a customized variable-resolution global model (e.g., Huang et al. 2017; Lui et al. 2020; Hsu et al. 2020). With variable resolution, regions of interest can be covered with grids of the highest possible resolution, while the remaining areas use coarser grids with smooth transitions from high to coarse resolution; this is worthy of further research for specific regions. The MPAS is distributed as an open-source model, so users can easily obtain, compile, and utilize it and even participate in its development. Some theoretical studies based on numerical experiments can also be carried out with the MPAS model, for example, to understand the role of land surface processes in the predictability of subseasonal forecasts. Hopefully, this work will bring attention to the potential application of the MPAS model to subseasonal forecasting, from research to operations.

Acknowledgments.

The authors thank the three anonymous reviewers who provided helpful comments and suggestions for improving the overall quality of the paper. This research has been supported by the Natural Science Foundation of China (41905074) and the Natural Science Foundation of Jiangsu Province (BK20190782). We acknowledge the High Performance Computing Center of Nanjing University of Information Science and Technology for their support of this work.

Data availability statement.

The data and model used in this study are free to the public. The MPAS source codes can be obtained at https://mpas-dev.github.io. The NCEP FNL data are available at https://rda.ucar.edu/datasets/ds083.2/. The S2S datasets are available at https://apps.ecmwf.int/datasets/. The ERA5 reanalysis data are available at https://cds.climate.copernicus.eu/. Data processing and figure production were performed using the NCAR Command Language (NCL) version 6.6.2, https://doi.org/10.5065/d6wd3xh5.

REFERENCES

• Anstey, J. A., and Coauthors, 2013: Multi-model analysis of Northern Hemisphere winter blocking: Model biases and the role of resolution. J. Geophys. Res. Atmos., 118, 3956–3971, https://doi.org/10.1002/jgrd.50231.
• Campbell, P. C., J. O. Bash, J. A. Herwehe, R. C. Gilliam, and D. Li, 2020: Impacts of tiled land cover characterization on global meteorological predictions using the MPAS-A. J. Geophys. Res. Atmos., 125, e2019JD032093, https://doi.org/10.1029/2019JD032093.
• Cao, Q., S. Shukla, M. J. DeFlorio, F. M. Ralph, and D. P. Lettenmaier, 2021: Evaluation of the subseasonal forecast skill of floods associated with atmospheric rivers in coastal western U.S. watersheds. J. Hydrometeor., 22, 1535–1552, https://doi.org/10.1175/jhm-d-20-0219.1.
• Collins, W. D., and Coauthors, 2004: Description of the NCAR Community Atmosphere Model (CAM 3.0). NCAR Tech. Note NCAR/TN-464+STR, 214 pp., https://doi.org/10.5065/D63N21CH.
• de Andrade, F. M., C. A. S. Coelho, and I. F. A. Cavalcanti, 2019: Global precipitation hindcast quality assessment of the subseasonal to seasonal (S2S) prediction project models. Climate Dyn., 52, 5451–5475, https://doi.org/10.1007/s00382-018-4457-z.
• Diro, G. T., and H. Lin, 2020: Subseasonal forecast skill of snow water equivalent and its link with temperature in selected SubX models. Wea. Forecasting, 35, 273–284, https://doi.org/10.1175/WAF-D-19-0074.1.
• Dutra, E., F. Johannsen, and L. Magnusson, 2021: Late spring and summer subseasonal forecasts in the Northern Hemisphere midlatitudes: Biases and skill in the ECMWF model. Mon. Wea. Rev., 149, 2659–2671, https://doi.org/10.1175/MWR-D-20-0342.1.
• Efron, B., 1979: Bootstrap methods: Another look at the jackknife. Ann. Stat., 7, 1–26, https://doi.org/10.1214/aos/1176344552.
• Ek, M. B., K. E. Mitchell, Y. Lin, E. Rogers, P. Grunmann, V. Koren, G. Gayno, and J. D. Tarpley, 2003: Implementation of Noah land surface model advances in the National Centers for Environmental Prediction operational mesoscale Eta model. J. Geophys. Res., 108, 8851, https://doi.org/10.1029/2002JD003296.
• Gao, X., Y. Xu, Z. Zhao, J. S. Pal, and F. Giorgi, 2006: On the role of resolution and topography in the simulation of East Asia precipitation. Theor. Appl. Climatol., 86, 173–185, https://doi.org/10.1007/s00704-005-0214-4.
• Gilliam, R. C., J. A. Herwehe, O. R. Bullock Jr., J. E. Pleim, L. Ran, P. C. Campbell, and H. Foroutan, 2021: Establishing the suitability of the Model for Prediction Across Scales for global retrospective air quality modeling. J. Geophys. Res. Atmos., 126, e2020JD033588, https://doi.org/10.1029/2020JD033588.
• Grell, G. A., and S. R. Freitas, 2014: A scale and aerosol aware stochastic convective parameterization for weather and air quality modeling. Atmos. Chem. Phys., 14, 5233–5250, https://doi.org/10.5194/acp-14-5233-2014.
• Ha, S., C. Snyder, W. C. Skamarock, J. Anderson, and N. Collins, 2017: Ensemble Kalman filter data assimilation for the Model for Prediction Across Scales (MPAS). Mon. Wea. Rev., 145, 4673–4692, https://doi.org/10.1175/MWR-D-17-0145.1.
• Hack, J. J., J. M. Caron, G. Danabasoglu, K. W. Oleson, C. Bitz, and J. E. Truesdale, 2006: CCSM–CAM3 climate simulation sensitivity to changes in horizontal resolution. J. Climate, 19, 2267–2289, https://doi.org/10.1175/JCLI3764.1.
• Hersbach, H., and Coauthors, 2020: The ERA5 global reanalysis. Quart. J. Roy. Meteor. Soc., 146, 1999–2049, https://doi.org/10.1002/qj.3803.
• Hong, S.-Y., and J.-O. J. Lim, 2006: The WRF single-moment 6-class microphysics scheme (WSM6). Asia-Pac. J. Atmos. Sci., 42, 129–151.
• Hong, S.-Y., Y. Noh, and J. Dudhia, 2006: A new vertical diffusion package with an explicit treatment of entrainment processes. Mon. Wea. Rev., 134, 2318–2341, https://doi.org/10.1175/MWR3199.1.
• Hsu, L.-H., L.-S. Tseng, S.-Y. Hou, B.-F. Chen, and C.-H. Sui, 2020: A simulation study of Kelvin waves interacting with synoptic events during December 2016 in the South China Sea and Maritime Continent. J. Climate, 33, 6345–6359, https://doi.org/10.1175/JCLI-D-20-0121.1.
• Hsu, L.-H., D.-R. Chen, C.-C. Chiang, J.-L. Chu, Y.-C. Yu, and C.-C. Wu, 2021: Simulations of the East Asian winter monsoon on subseasonal to seasonal time scales using the Model for Prediction Across Scales. Atmosphere, 12, 865, https://doi.org/10.3390/atmos12070865.
• Hsu, P.-C., T. Li, L. You, J. Gao, and H.-L. Ren, 2015: A spatial–temporal projection model for 10–30 day rainfall forecast in South China. Climate Dyn., 44, 1227–1244, https://doi.org/10.1007/s00382-014-2215-4.
• Huang, C.-Y., Y. Zhang, W. C. Skamarock, and L.-H. Hsu, 2017: Influences of large-scale flow variations on the track evolution of typhoons Morakot (2009) and Megi (2010): Simulations with a global variable-resolution model. Mon. Wea. Rev., 145, 1691–1716, https://doi.org/10.1175/MWR-D-16-0363.1.
• Iacono, M. J., J. S. Delamere, E. J. Mlawer, M. W. Shephard, S. A. Clough, and W. D. Collins, 2008: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer models. J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944.
• Imberger, M., X. G. Larsén, and N. Davis, 2021: Investigation of spatial and temporal wind-speed variability during open cellular convection with the Model for Prediction Across Scales in comparison with measurements. Bound.-Layer Meteor., 179, 291–312, https://doi.org/10.1007/s10546-020-00591-0.
• Kain, J. S., 2004: The Kain–Fritsch convective parameterization: An update. J. Appl. Meteor. Climatol., 43, 170–181, https://doi.org/10.1175/1520-0450(2004)043<0170:TKCPAU>2.0.CO;2.
• Kessler, E., 1969: On the Distribution and Continuity of Water Substance in Atmospheric Circulations. Meteor. Monogr., No. 10, Amer. Meteor. Soc., 84 pp., https://doi.org/10.1007/978-1-935704-36-2_1.
• Klingaman, N. P., and Coauthors, 2021: Subseasonal prediction performance for austral summer South American rainfall. Wea. Forecasting, 36, 147–169, https://doi.org/10.1175/WAF-D-19-0203.1.
• Kramer, M., D. Heinzeller, H. Hartmann, W. van den Berg, and G.-J. Steeneveld, 2020: Assessment of MPAS variable resolution simulations in the grey-zone of convection against WRF model results and observations. Climate Dyn., 55, 253–276, https://doi.org/10.1007/s00382-018-4562-z.
• Li, W., S. Hu, P.-C. Hsu, W. Guo, and J. Wei, 2020: Systematic bias of Tibetan Plateau snow cover in subseasonal-to-seasonal models. Cryosphere, 14, 3565–3579, https://doi.org/10.5194/tc-14-3565-2020.
• Lui, Y. S., C.-Y. Tam, L. K.-S. Tse, K.-K. Ng, W.-N. Leung, and C. C. Cheung, 2020: Evaluation of a customized variable-resolution global model and its application for high-resolution weather forecasts in East Asia. Earth Space Sci., 7, e2020EA001228, https://doi.org/10.1029/2020EA001228.
• Lui, Y. S., L. K. S. Tse, C.-Y. Tam, K. H. Lau, and J. Chen, 2021: Performance of MPAS-A and WRF in predicting and simulating western North Pacific tropical cyclone tracks and intensities. Theor. Appl. Climatol., 143, 505–520, https://doi.org/10.1007/s00704-020-03444-5.
• Maoyi, M. L., and B. J. Abiodun, 2021: How well does MPAS-Atmosphere simulate the characteristics of the Botswana High? Climate Dyn., 57, 2109–2128, https://doi.org/10.1007/s00382-021-05797-7.
• Michaelis, A. C., G. M. Lackmann, and W. A. Robinson, 2019: Evaluation of a unique approach to high-resolution climate modeling using the Model for Prediction Across Scales–Atmosphere (MPAS-A) version 5.1. Geosci. Model Dev., 12, 3725–3743, https://doi.org/10.5194/gmd-12-3725-2019.
• Monin, A. S., and A. M. Obukhov, 1954: Basic laws of turbulent mixing in the surface layer of the atmosphere. Contrib. Geophys. Inst. Acad. Sci. USSR, 24, 163–187.
• Nakanishi, M., and H. Niino, 2006: An improved Mellor–Yamada level-3 model: Its numerical stability and application to a regional prediction of advection fog. Bound.-Layer Meteor., 119, 397–407, https://doi.org/10.1007/s10546-005-9030-8.
• Pilon, R., C. Zhang, and J. Dudhia, 2016: Roles of deep and shallow convection and microphysics in the MJO simulated by the Model for Prediction Across Scales. J. Geophys. Res. Atmos., 121, 10 575–10 600, https://doi.org/10.1002/2015JD024697.
• Qian, Y., P.-C. Hsu, H. Murakami, B. Xiang, and L. You, 2020: A hybrid dynamical-statistical model for advancing subseasonal tropical cyclone prediction over the western North Pacific. Geophys. Res. Lett., 47, e2020GL090095, https://doi.org/10.1029/2020GL090095.
• Schwartz, C. S., 2019: Medium-range convection-allowing ensemble forecasts with a variable-resolution global model. Mon. Wea. Rev., 147, 2997–3023, https://doi.org/10.1175/MWR-D-18-0452.1.
• Skamarock, W. C., J. B. Klemp, M. G. Duda, L. D. Fowler, S.-H. Park, and T. D. Ringler, 2012: A multiscale nonhydrostatic atmospheric model using centroidal Voronoi tesselations and C-grid staggering. Mon. Wea. Rev., 140, 3090–3105, https://doi.org/10.1175/MWR-D-11-00215.1.
• Strachan, J., P. L. Vidale, K. Hodges, M. Roberts, and M.-E. Demory, 2013: Investigating global tropical cyclone activity with a hierarchy of AGCMs: The role of model resolution. J. Climate, 26, 133–152, https://doi.org/10.1175/JCLI-D-12-00012.1.
• Thompson, G., P. R. Field, R. M. Rasmussen, and W. D. Hall, 2008: Explicit forecasts of winter precipitation using an improved bulk microphysics scheme. Part II: Implementation of a new snow parameterization. Mon. Wea. Rev., 136, 5095–5115, https://doi.org/10.1175/2008MWR2387.1.
• Tian, X., and X. Zou, 2021: Validation of a prototype global 4D-Var data assimilation system for the MPAS-Atmosphere model. Mon. Wea. Rev., 149, 2803–2817, https://doi.org/10.1175/MWR-D-20-0408.1.
• Vitart, F., 2017: Madden–Julian oscillation prediction and teleconnections in the S2S database. Quart. J. Roy. Meteor. Soc., 143, 2210–2220, https://doi.org/10.1002/qj.3079.
• Vitart, F., and A. W. Robertson, 2018: The sub-seasonal to seasonal prediction project (S2S) and the prediction of extreme events. npj Climate Atmos. Sci., 1, 3, https://doi.org/10.1038/s41612-018-0013-0.
• Vitart, F., and Coauthors, 2017: The Subseasonal to Seasonal (S2S) prediction project database. Bull. Amer. Meteor. Soc., 98, 163–173, https://doi.org/10.1175/BAMS-D-16-0017.1.
• Wang, H., A. Kumar, A. Diawara, D. DeWitt, and J. Gottschalck, 2021: Dynamical–statistical prediction of week-2 severe weather for the United States. Wea. Forecasting, 36, 109–125, https://doi.org/10.1175/WAF-D-20-0009.1.
• Weber, N. J., and C. F. Mass, 2019: Subseasonal weather prediction in a global convection-permitting model. Bull. Amer. Meteor. Soc., 100, 1079–1089, https://doi.org/10.1175/BAMS-D-18-0210.1.
• Weber, N. J., C. F. Mass, and D. Kim, 2020: The impacts of horizontal grid spacing and cumulus parameterization on subseasonal prediction in a global convection-permitting model. Mon. Wea. Rev., 148, 4747–4765, https://doi.org/10.1175/MWR-D-20-0171.1.
• White, C. J., and Coauthors, 2017: Potential applications of subseasonal-to-seasonal (S2S) predictions. Meteor. Appl., 24, 315–325, https://doi.org/10.1002/met.1654.
• Yan, Y., B. Liu, and C. Zhu, 2021: Subseasonal predictability of South China Sea summer monsoon onset with the ECMWF S2S forecasting system. Geophys. Res. Lett., 48, e2021GL095943, https://doi.org/10.1029/2021GL095943.
• Zhang, C., and Y. Wang, 2017: Projected future changes of tropical cyclone activity over the western North and South Pacific in a 20-km-mesh regional climate model. J. Climate, 30, 5923–5941, https://doi.org/10.1175/JCLI-D-16-0597.1.
• Zhao, C., and Coauthors, 2019: Modeling extreme precipitation over East China with a global variable-resolution modeling framework (MPASv5.2): Impacts of resolution and physics. Geosci. Model Dev., 12, 2707–2726, https://doi.org/10.5194/gmd-12-2707-2019.
• Zhu, J., A. Kumar, and W. Wang, 2020: Dependence of MJO predictability on convective parameterizations. J. Climate, 33, 4739–4750, https://doi.org/10.1175/JCLI-D-18-0552.1.
Save
  • Fig. 1.

    The systematic biases of wintertime surface air temperature (SAT; K) in week-2 forecasts by the (a) MPAS, (b) CMA, and (c) ECMWF models relative to the ERA5 reanalysis. The magenta boxes indicate the subregions defined for regional averaging in Fig. 2.

  • Fig. 2.

    Regional averaging of the systematic biases of wintertime SAT (K) in week-2 forecasts by the MPAS, CMA, and ECMWF models relative to the ERA5 reanalysis, over the Northern Hemisphere (NH; north of 20°N), East Asia (EA), Europe (EU), and North America (NA). The areas defined for averaging are indicated by the magenta outlines in Fig. 1.

  • Fig. 3.

    Forecast skill for wintertime SAT in the three models. The spatial pattern of temporal correlation coefficients (TCCs) between the week-2 forecasted SAT anomalies in the (a) MPAS, (b) CMA, and (c) ECMWF models and those in the ERA5 reanalysis. (d)–(f) As in (a)–(c), but for the root-mean-square errors (RMSEs; standard deviations). The magenta boxes indicate the subregions defined for regional averaging in Fig. 4.

  • Fig. 4.

    Regional averaging of the spatial TCCs and root-mean-square errors (RMSEs) shown in Fig. 3 over (a) the NH (north of 20°N), (b) EA, (c) EU, and (d) NA. The areas defined for averaging are indicated by the magenta outlines in Fig. 3. The x axis and y axis represent TCC and RMSE, respectively. Note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill.

  • Fig. 5.

    The spatial pattern of TCCs between the forecasted (a)–(c) sea level pressure (SLP), (d)–(f) geopotential height at 500 hPa (H500), and (g)–(i) zonal wind at 200 hPa (U200) in the (left) MPAS, (center) CMA, and (right) ECMWF models at a lead of 2 weeks and those in the ERA5 reanalysis. The magenta boxes indicate the subregions defined for regional averaging in Fig. 7.

  • Fig. 6.

    As in Fig. 5, but for the RMSEs (standard deviations).

  • Fig. 7.

    Regional averaging of the spatial TCCs and RMSEs for (a) SLP, (b) H500, and (c) U200 over the Northern Hemisphere (north of 20°N). The x axis and y axis represent TCC and RMSE, respectively. Note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill. The areas defined for averaging are indicated by the magenta outlines in Figs. 5 and 6.

  • Fig. 8.

    Regional averaging of the spatial TCCs and RMSEs of SAT over (a) the NH (north of 20°N), (b) EA, (c) EU, and (d) NA. TCCs and RMSEs are calculated from the forecasted SAT in the MPAS, the 10 S2S models, and MPAS-60-km at a lead of 2 weeks, verified against the ERA5 reanalysis. The areas defined for averaging are indicated by the magenta outlines in Fig. 3. The x axis and y axis represent TCC and RMSE, respectively. Note that the y axis (RMSE) has been reversed. A higher TCC and lower RMSE indicate better forecast skill. The axis ranges differ among panels. Dashed lines represent the 5% significance-level intervals for the MPAS, calculated with a bootstrap resampling method (a minimal computational sketch of these metrics follows the figure list).

  • Fig. 9.

    As in Fig. 8, but calculated from the forecasted SAT in the MPAS with the default parameterization schemes (Exp-default) and in the 10 other sensitivity experiments listed in Table 1. Dashed lines represent the 5% significance-level intervals for Exp-default, calculated with a bootstrap resampling method.

  • Fig. 10.

    As in Fig. 8, but calculated from the forecasted SAT in the MPAS and the 10 S2S models at leads of (a)–(d) 3 weeks and (e)–(h) 4 weeks, verified against the ERA5 reanalysis.

  • Fig. 11.

    Regional averaging of the (a) spatial TCCs and (b) RMSEs (standard deviations) over the Northern Hemisphere (north of 20°N), calculated from the forecasted SAT in the MPAS (integration with constant SST) and MPAS-SST (integration with daily updated SST), verified against the ERA5 reanalysis. The x axis represents the forecast lead (weeks). The y axis represents TCCs or RMSEs. Error bars represent the 5% significance-level intervals calculated with a bootstrap resampling method.
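The figure captions above repeatedly refer to three calculations: gridpoint TCCs and RMSEs of forecast anomalies verified against ERA5, their area-weighted regional means, and bootstrap uncertainty intervals (Efron 1979). The sketch below is a minimal illustration of those calculations, not the authors' code; the array names, shapes, and synthetic demonstration data are assumptions, it presumes the forecast and ERA5 weekly-mean anomalies have already been regridded to a common grid, and it assumes the bootstrap resamples forecast start dates with replacement.

```python
"""Minimal sketch of the TCC/RMSE/bootstrap skill metrics (illustrative only)."""
import numpy as np

rng = np.random.default_rng(42)


def tcc_rmse(fcst, obs):
    """Gridpoint TCC and RMSE between forecast and observed anomaly fields.

    fcst, obs : arrays of shape (n_starts, n_lat, n_lon) holding weekly-mean
    SAT anomalies from the forecasts and from ERA5 on a common grid (assumed).
    """
    fa = fcst - fcst.mean(axis=0)
    oa = obs - obs.mean(axis=0)
    cov = (fa * oa).mean(axis=0)
    tcc = cov / (fa.std(axis=0) * oa.std(axis=0))
    rmse = np.sqrt(((fcst - obs) ** 2).mean(axis=0))
    return tcc, rmse


def area_mean(field, lat):
    """Cosine-latitude-weighted mean of a (n_lat, n_lon) field."""
    weights = np.cos(np.deg2rad(lat))[:, None] * np.ones(field.shape[1])[None, :]
    return np.average(field, weights=weights)


def bootstrap_interval(fcst, obs, lat, n_boot=1000, alpha=0.05):
    """Percentile interval for the area-mean TCC, obtained by resampling
    forecast start dates with replacement (an assumed resampling choice)."""
    n = fcst.shape[0]
    samples = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)
        tcc, _ = tcc_rmse(fcst[idx], obs[idx])
        samples[i] = area_mean(tcc, lat)
    return np.percentile(samples, [100 * alpha / 2, 100 * (1 - alpha / 2)])


if __name__ == "__main__":
    # Synthetic stand-in for week-2 SAT anomalies north of 20°N (not real data).
    n_starts, lat = 60, np.linspace(20.0, 88.0, 35)
    fcst = rng.standard_normal((n_starts, lat.size, 90))
    obs = 0.5 * fcst + rng.standard_normal(fcst.shape)
    tcc, rmse = tcc_rmse(fcst, obs)
    lo, hi = bootstrap_interval(fcst, obs, lat, n_boot=200)
    print(f"NH-mean TCC = {area_mean(tcc, lat):.2f}, "
          f"RMSE = {area_mean(rmse, lat):.2f}, "
          f"95% bootstrap interval for TCC = [{lo:.2f}, {hi:.2f}]")
```

Resampling whole forecast start dates, rather than individual grid points, keeps each resampled map spatially intact; this is one plausible reading of the captions' "bootstrap resampling method," offered here only as an assumption.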
