1. Introduction
Coupled global climate models are the primary tool for predicting how climate may change in the near and distant future, i.e., from seasonal to century time scales. Compared to observations, their simulations have systematic biases, which can be caused by a variety of factors (Zhang et al. 2020). Because of computational resource constraints, coarse resolution (∼1°) is still commonly used by coupled global climate models (e.g., CMIP6; Eyring et al. 2016) and in many studies of ocean–atmosphere interactions. Coarse-resolution models cannot resolve important physical processes at smaller scales, which limits their usefulness for regional climate prediction and climate impact assessments (Maraun et al. 2010; Watt-Meyer et al. 2021).
Dynamical downscaling using regional climate models (RCMs) can transfer coarse global simulations to finer regional or local scales of interest (Pielke and Wilby 2012; Zhou et al. 2018). RCMs have higher resolution and are driven by initial and boundary conditions obtained from global climate model simulations, which enables assessment of more spatially detailed information at local to regional scales (Chokkavarapu and Mandla 2019; Rockel 2015). RCMs are also widely used to predict regional climate, which likewise requires initial and boundary conditions supplied by global climate models (Maraun et al. 2010; Tang et al. 2016). To date, more than 60 RCMs have been developed and used to provide regional climate simulations and predictions worldwide (Sangelantoni et al. 2019). The accuracy of RCM downscaled simulations and climate predictions is strongly influenced by the systematic biases of global climate models, which can be transferred to RCMs through the initial and boundary conditions; this affects the assessment of present-day climate simulations and future change projections (Collins and Allen 2002; Moalafhi et al. 2016, 2017). Thus, global climate model simulations need to be bias corrected rather than used directly as the initial and boundary conditions of RCMs.
Sea surface temperature (SST) is an essential climate variable in ocean–atmosphere interactions, forcing atmospheric variability. Air–sea turbulent fluxes of heat, moisture, and momentum link SST changes to atmospheric variability, providing the mechanisms of ocean–atmosphere interaction (e.g., Bourassa et al. 2013). SST simulated by global climate models is used as the initial and boundary conditions for RCMs and strongly influences RCM performance, especially in areas with strong atmosphere–ocean interactions (Rockel 2015). For example, SST forecast products from the NCEP Climate Forecast System (CFSv2) and ECMWF's long-range forecasting system (SEAS5) are widely used by RCMs for weather and climate predictions (Abhilash et al. 2014; Tietsche et al. 2020).
Saha et al. (2014) showed that the SST bias in CFSv2 is small in the tropical ocean but large in mid- and high-latitude oceans, and that the bias is significant in winter. Johnson et al. (2019) showed that SEAS5 has a warm SST bias in the northern tropical ocean and a different bias pattern in the northern extratropical ocean. The error of the SST forecast by global climate models at lead times of several months varies with location and time. Thus, directly using the forecast SST from global climate models as RCM input introduces large uncertainty, which might have profound impacts on local to regional simulations. Tietsche et al. (2020) presented evidence that the DJF forecast SST bias of SEAS5 in the subpolar North Atlantic Ocean can degrade regional skill and broadly influence the atmospheric forecast. The subseasonal-to-seasonal forecast is strongly influenced by both initial and boundary SST conditions, with the latter becoming increasingly important relative to the former as the forecast period lengthens (Bo et al. 2020; Meehl et al. 2021; Robertson et al. 2020).
Correcting the simulated or predicted SST bias is challenging. Some studies corrected the SST bias using a simple arithmetic mean or a weighted multimodel combination, which showed some benefit but was not effective at correcting extreme fluctuations (e.g., Knutti et al. 2010). Recently, Liu and Ren (2017) developed an analog-based method that corrects CFSv2 SST empirically using historical forecast errors, improving the predictive skill of El Niño–Southern Oscillation (ENSO). Hou et al. (2021) used a local dynamical analog algorithm to correct CFSv2 SST, which showed some positive effects on predictive skill in some regions. However, these correction methods are mainly based on linear theory and are not effective at correcting nonlinear biases in the simulated or predicted SST.
Significant progress has been made in nonlinear techniques over the past decade, such as artificial neural networks, which can better handle the nonlinear and nonstationary characteristics of climate data (LeCun et al. 2015; Reichstein et al. 2019). For example, neural networks have been successfully applied to short-term weather and climate prediction (Sarkar et al. 2020; Wu et al. 2005; Xiao et al. 2019) and to reducing biases in satellite precipitation products (Tao et al. 2016; Yang et al. 2022).
The goal of this paper is to examine to what extent neural networks can correct the nonstationary SST bias in the NCEP CFSv2 seasonal forecast relative to observations. Specifically, we construct single- and three-hidden-layer neural networks (hereafter ANN1 and ANN3, respectively), as well as their ensemble averages, and compare their effectiveness in reducing the spatial and temporal SST bias in the extratropical ocean in the CFSv2 seasonal forecast.
2. Data and methods
In this study, SST from the NCEP CFSv2 (Saha et al. 2014) operational 9-month forecast is used. The CFSv2 forecast runs are initialized at 0000, 0600, 1200, and 1800 UTC on each initial day, providing valuable real-time data for many aspects of seasonal climate prediction. Here we focus on the CFSv2 forecast initialized at 1200 UTC from 1 July to 31 January of the following year for each year from 2011 to 2018 (i.e., for 2011, from 1 July 2011 to 31 January 2012). The CFSv2 forecast SST has a spatial resolution of 1° and a temporal resolution of 6 h. The 1/4° daily NOAA Optimum Interpolation Sea Surface Temperature version 2 (OISST; Reynolds et al. 2002) is also used, interpolated to the CFSv2 1° grid. Hereafter, we refer to OISST as the observation. In this study, we focus on the SST bias correction for the CFSv2 forecast in the mid- and high latitudes of the Northern Hemisphere.
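The interpolation scheme used to bring the 1/4° OISST onto the 1° CFSv2 grid is not specified here; one common and simple choice is a 4 × 4 block (area) average, sketched below in Python with NumPy. The grid extent and field values are illustrative assumptions.

```python
import numpy as np

# Sketch: regrid a 1/4-deg field to 1 deg by 4x4 block averaging.
# This is one plausible scheme, not necessarily the one used operationally.
def block_average_to_1deg(sst_quarter_deg):
    """sst_quarter_deg: 2D array with shape (4*nlat, 4*nlon)."""
    nlat4, nlon4 = sst_quarter_deg.shape
    assert nlat4 % 4 == 0 and nlon4 % 4 == 0
    # Group each 4x4 block of 1/4-deg cells and average it into one 1-deg cell.
    return sst_quarter_deg.reshape(nlat4 // 4, 4, nlon4 // 4, 4).mean(axis=(1, 3))

# Example: a synthetic 0.25-deg field covering a 40 deg x 40 deg box
fine = np.random.rand(160, 160)
coarse = block_average_to_1deg(fine)
print(coarse.shape)  # (40, 40)
```

A proper treatment would also mask land cells and weight by cell area, which varies with latitude; the sketch omits both for brevity.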
Here we divide the CFSv2 forecast SST into two groups: the data from 2011 to 2017 are used as the training data, whereas the data in 2018 are used as the testing data, which do not participate in the training of the neural networks. For the training, both ANN1 and ANN3 randomly divide the 2011–17 data into a training sample, a validation sample, and a testing sample (the three subsets do not overlap), with ratios of 70%, 15%, and 15%, respectively. The training sample is used to find the relationship between the input and the target; the validation sample is used to evaluate whether the network finds the optimal relationship from the training sample and to adjust the parameters to avoid overfitting. The testing sample is used to further evaluate the performance of the network as an independent sample.
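The nonoverlapping 70/15/15 random split can be sketched as follows (a minimal Python illustration; the sample count and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # number of samples in the 2011-17 training period (illustrative)

# Shuffle all indices once, then cut into three disjoint subsets.
idx = rng.permutation(n)
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_idx = idx[:n_train]                    # 70%: fit the weights
val_idx = idx[n_train:n_train + n_val]       # 15%: monitor overfitting
test_idx = idx[n_train + n_val:]             # 15%: independent evaluation
print(len(train_idx), len(val_idx), len(test_idx))  # 700 150 150
```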
To determine the optimal number of neurons for ANN1 and ANN3, we test 1, 5, 10, 15, 20, 25, and 30 neurons for ANN1, and [1, 1, 1], [5, 5, 5], [10, 10, 10], [15, 15, 15], [20, 20, 20], [25, 25, 25], and [30, 30, 30] neurons for ANN3. The correlation coefficient (R) and root-mean-square error (RMSE) between the output of ANN1 or ANN3 and the observed SST are used to evaluate the performance of these networks.
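The two selection metrics can be sketched as follows (a minimal Python illustration; the SST arrays are synthetic stand-ins for network output and observations):

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between corrected and observed SST."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

def corr(pred, obs):
    """Pearson correlation coefficient R between corrected and observed SST."""
    return float(np.corrcoef(np.asarray(pred), np.asarray(obs))[0, 1])

# Synthetic example values (deg C)
obs = np.array([10.0, 12.0, 14.0, 16.0])
pred = np.array([10.5, 11.5, 14.5, 15.5])
print(rmse(pred, obs))  # 0.5
print(corr(pred, obs))
```

In practice these metrics would be evaluated for each candidate neuron count on the validation and testing samples, and the smallest configuration beyond which RMSE and R level off would be selected.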
3. Results
a. Single-hidden-layer neural network
As shown in Fig. 2a, the evaluation based on the testing sample of ANN1 suggests that the RMSE for the training data decreases with an increasing number of neurons but tends to level off once the number of neurons exceeds 15. Meanwhile, the correlation coefficient between the output of ANN1 or ANN3 and the observed SST (hereafter R) increases with the number of neurons and approaches an asymptote once the number of neurons exceeds 15 (Fig. 3a). This is also the case for the independent testing data in 2018 (Figs. 2b and 3b). Thus, we choose 15 neurons to develop ANN1.
Previous studies suggested that a neural network may be sensitive to network parameters such as the initially assigned weights and biases (e.g., Liu et al. 2019). Here we run ANN1 (with 15 neurons) 20 times, each with randomly assigned initial weights and biases and trained on a randomly selected training sample from the 2011–17 training data. The performance of each member of the ensemble-based ANN1 is shown in Figs. 2c and 2d. The RMSEs of the 20 ANN1s vary from 1.10° to 1.24°C with the initial weights and biases and the random sampling, though R changes minimally (Figs. 3c,d). To obtain a robust network and reduce the sensitivity to the initial parameters given the large size of the SST data, we compute the ensemble average of the 20 ANN1s as the final output. The ensemble average of the 20 ANN1s has an RMSE of 1.09°C (1.14°C) for the training (testing) data, smaller than that of any individual ANN1. Note that ensemble averaging also tends to reduce the effect of outliers.
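The variance-reducing effect of averaging members with partly independent errors can be illustrated with a toy simulation (no actual network is trained here; the noise level mimicking member-to-member spread is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.standard_normal(5000)  # stand-in for observed SST anomalies

# Simulate 20 "members" whose errors are partly independent, mimicking
# networks trained from different random initial weights and samples.
members = [truth + 0.5 * rng.standard_normal(truth.size) for _ in range(20)]

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

member_rmses = [rmse(m, truth) for m in members]
ensemble = np.mean(members, axis=0)  # ensemble average of the 20 members

# The ensemble mean beats even the best single member, because the
# independent parts of the member errors cancel when averaged.
print(min(member_rmses), rmse(ensemble, truth))
```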
Table 1 shows the RMSE between the SST of CFSv2, ANN1, and ANN3 and OISST for the testing data in 2018. The ensemble-based ANN1 reduces the RMSE between the CFSv2 predicted SST and OISST by about 0.35°C for the testing data in the extratropical Northern Hemisphere, and the ensemble-based ANN3 reduces it further, by about 0.49°C. This also holds for five selected regions in the Atlantic and Pacific: the RMSE between the CFSv2 forecast SST and OISST is decreased by 0.95°, 0.56°, 0.53°, 0.46°, and 0.29°C by ANN1 for regions 1–5, respectively, and by 0.99°, 0.58°, 0.67°, 0.76°, and 0.41°C by ANN3. Thus, ANN3 performs relatively better than ANN1 in correcting the bias of the CFSv2 forecast SST.
Table 1. RMSE (°C) between the SST of CFSv2, ANN1, and ANN3 and OISST for the testing data in 2018.
As shown in Fig. 4 (left column), the spatial distributions of the SST bias corrected by the best and worst ANN1 are not entirely consistent; i.e., the best network shows a warm bias in the corrected CFSv2 SST in the subtropics and midlatitudes of the central Pacific and Atlantic, whereas the worst network shows the opposite bias. Such discrepancy among individual networks arises largely because the initial weights and biases are assigned randomly and the training sample (subset) of each network is drawn randomly from the training data. The sensitivity to initial parameter values and the possible sampling dependence can be reduced with ensemble averaging. The spatial distribution of the SST bias of the ensemble average of 20 ANN1s is in good agreement with that of 10 ANN1s (Figs. 4f,h).
We examine the spatial distribution of the nonstationary SST bias corrected by the ensemble average of 20 ANN1s at different forecast times in 2015 (the training data) and 2018 (the testing data). Figure 5 shows the SST difference between the CFSv2 forecast and OISST, between the ensemble-based ANN1 and the CFSv2 forecast, and between the ensemble-based ANN1 and OISST on 30 August (day 61 of the forecast) and 28 November (day 151 of the forecast). For the forecast SST on day 61 in 2015 and 2018 (the same summer date in different years), CFSv2 shows large warm biases extending from the midlatitude central Atlantic to the Barents–Kara Sea and along the east coast of Asia (in 2015) or the west coast of North America (in 2018) to the Chukchi–Beaufort Sea, and large cold biases in the subtropics and midlatitudes of the western Atlantic and central Pacific. The ensemble-based ANN1 significantly reduces the warm biases in these areas (Figs. 5a,c). For the forecast SST on day 151 in 2015 and 2018 (the same winter date in different years), the widespread large cold biases over much of the Pacific and in the subtropics and midlatitudes of the western Atlantic are significantly reduced (Figs. 5b,d).
Next, we examine the evolution of the nonstationary SST bias corrected by the ensemble average of 20 ANN1s over time. Here we select three regions in the high latitudes of the Atlantic and Pacific (Fig. 6): region 1 (60°–80°N, 30°W–0°), covering the northwestern Atlantic including part of the subpolar gyre; region 2 (68°–80°N, 10°–60°E), covering the Barents Sea, a transition area where warm Atlantic water moves toward the Arctic Ocean; and region 3 (58°–72°N, 170°E–160°W), representing the Bering Sea and the Chukchi Sea. Figures 7a–f show the evolution of the averaged SST in the three selected regions for 2015 (the training data) and 2018 (the testing data). For 2015, CFSv2 predicts warmer SST than observed from early July to early October, with the largest warm bias (∼3°C) in late August, for all three regions. The CFSv2 forecast then approaches the OISST until late November, after which the CFSv2 predicted SST shifts to a cold bias for regions 1 and 2 but retains a warm bias for region 3. This is also true for 2018. Encouragingly, the SST corrected by the ensemble-based ANN1 is in good agreement with the observations, reducing the aforementioned time-varying SST biases, although the corrected CFSv2 SST shows a cold bias from September to October 2018 for region 3 and a comparable bias of the opposite sign over the same period for region 1.
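The regional SST averages discussed above can be sketched as a latitude-weighted box mean on a 1° grid (a minimal Python illustration; the grid and SST field are synthetic, and region 3's dateline-crossing box is expressed in 0°–360° longitude):

```python
import numpy as np

# Synthetic 1-deg global grid: cell centers at half degrees
lats = np.arange(-89.5, 90.0, 1.0)   # 180 latitudes
lons = np.arange(0.5, 360.0, 1.0)    # 360 longitudes (0-360 convention)
# Illustrative SST field that only varies with latitude
sst = 15.0 * np.cos(np.radians(lats))[:, None] * np.ones((lats.size, lons.size))

def box_mean(field, lat_min, lat_max, lon_min, lon_max):
    """Cosine-of-latitude-weighted mean over a lat-lon box."""
    la = (lats >= lat_min) & (lats <= lat_max)
    lo = (lons >= lon_min) & (lons <= lon_max)
    sub = field[np.ix_(la, lo)]
    # Weight each row by cos(lat) to account for shrinking cell area poleward
    w = np.cos(np.radians(lats[la]))[:, None] * np.ones((1, int(lo.sum())))
    return float((sub * w).sum() / w.sum())

# Region 3 (58-72N, 170E-160W) becomes 170-200 in 0-360 longitude
print(box_mean(sst, 58.0, 72.0, 170.0, 200.0))
```

A real application would also mask land and ice-covered cells before averaging, which the sketch omits.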
b. Three-hidden-layer neural network
Previous studies suggest that, with multiple hidden layers, ANN3 may better identify abstract, vital input features and discard irrelevant information (Bengio 2009; Chi and Kim 2017; LeCun et al. 2015). As shown in Fig. 2e (the training data) and Fig. 2f (the testing data), for ANN3 the RMSE decreases quickly with an increasing number of neurons and approaches an asymptote as the number of neurons increases to 10. Thus, we choose [10, 10, 10] neurons to develop ANN3, since its training time is less than half that with [15, 15, 15] neurons. Again, we run ANN3 20 times with randomly assigned initial parameters and train each on a randomly drawn sample. As shown in Figs. 2g and 2h, the RMSEs of the 20 ANN3s vary from 0.93° to 1.1°C with the initial weights and biases, smaller than those of the 20 ANN1s discussed above (Figs. 2c,d). The ensemble average has an RMSE of 0.96°C (1.0°C) for the training (testing) data, smaller than that of any individual ANN3. Thus, the ensemble-based ANN3 performs relatively better than the ensemble-based ANN1 for both training and testing data.
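The [10, 10, 10] architecture amounts to a forward pass through three hidden layers; a minimal NumPy sketch is given below (the input size, tanh activation, and random initialization are illustrative assumptions, not the exact configuration used here):

```python
import numpy as np

rng = np.random.default_rng(42)
# input -> three hidden layers of 10 neurons each -> scalar SST output
sizes = [5, 10, 10, 10, 1]
weights = [rng.standard_normal((a, b)) * 0.1 for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Forward pass: tanh hidden layers, linear output layer."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.tanh(h @ w + b)          # nonlinear hidden layers
    return h @ weights[-1] + biases[-1]  # linear output (corrected SST)

out = forward(rng.standard_normal((3, 5)))  # batch of 3 input vectors
print(out.shape)  # (3, 1)
```

Training such a network (e.g., by backpropagation with a validation-based stopping rule) is omitted; the sketch only shows how the three hidden layers compose.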
Like ANN1, the spatial distributions of the SST bias corrected by the best and worst ANN3 are not entirely consistent in some regions, though they agree relatively better than those of ANN1. The ensemble average of 20 ANN3s effectively reduces the sensitive dependence on initial parameters, which is also reflected by its consistency with the ensemble average of 10 ANN3s (Figs. 4g,i).
As shown by the SST bias distribution at different forecast days (Figs. 5e–h), the ensemble-based ANN3 markedly reduces 1) the warm (cold) biases extending from the midlatitude central Atlantic to the Barents–Kara Sea and along the east coast of Asia and the west coast of North America to the Pacific sector of the Arctic (in the subtropics and midlatitudes of the western Atlantic and western and central Pacific) on day 61 for 2015 and 2018 (Figs. 5e,g), and 2) the cold biases over much of the Pacific and in the subtropics and midlatitudes of the western Atlantic on day 151 for 2015 and 2018 (Figs. 5f,h). Compared to the ensemble-based ANN1 (Figs. 5a–d), the ensemble-based ANN3 performs relatively better in correcting the bias of the CFSv2 forecast SST, especially its magnitude.
Moreover, the ensemble-based ANN3 better corrects the time-varying SST bias than the ensemble-based ANN1; i.e., the ensemble-based ANN3 shows better agreement with the observed SST after late November in region 3 for both 2015 and 2018 (Figs. 7g–l). We further compare the evolution of the SST bias correction between the ensemble averages of 20 ANN1s and 20 ANN3s for two more regions in the subtropics and midlatitudes of the Atlantic and Pacific Oceans (Fig. 6): region 4 (30°–45°N, 120°E–180°) and region 5 (30°–45°N, 45°W–0°). As shown in Fig. 8, the ensemble-based ANN3 has smaller errors than the ensemble-based ANN1 for these two regions, especially in December and the following January.
Figure 9 shows scatterplots between the OISST and the ensemble-based ANN1 and ANN3 SST on day 50 (19 August 2018) and day 180 (27 December 2018) for the testing data. Both the ensemble-based ANN1 and ANN3 SST show better agreement with the observations than the CFSv2 forecast SST in the five subregions, as reflected by the correlation coefficients. The scatter markers of the ensemble-based ANN3 versus OISST are relatively more concentrated near the regression line than those of ANN1 for some subregions (e.g., Fig. 9c3 versus Fig. 9d3) and for region 4 (Figs. 9c4,d4), which is also reflected by the correlation coefficients.
4. Conclusions
In this study, we investigate whether deep learning models can correct the nonstationary SST bias in a coupled climate prediction model by constructing ANN1 and ANN3 models. Our results demonstrate that ensemble-based neural networks can reduce the uncertainty associated with the initially assigned parameters and the dependence on random sampling compared to a single neural network. Both the ensemble-based ANN1 with 15 neurons and the ensemble-based ANN3 with [10, 10, 10] neurons in three hidden layers can effectively reduce the bias of the CFSv2 forecast SST both spatially and temporally. With multiple hidden layers, the ensemble-based ANN3 shows relatively better agreement with the observations than the ensemble-based ANN1 for both training and testing data, i.e., smaller bias in the subtropics and midlatitudes of the Atlantic and Pacific. However, this study is conducted for the forecast SST by CFSv2 at a fixed initial time (1200 UTC 1 July); we will extend the analysis to different initial times in future research.
Since there is no large difference in the time needed to train the ensemble-based ANN1 and ANN3 (∼10–14 h for ANN1, 20 members with 15 neurons, and ∼12–16 h for ANN3, 20 members with [10, 10, 10] neurons, on the same computing cluster), our study suggests that an ensemble-based three-hidden-layer neural network is a useful tool for correcting forecast variables from global climate models, providing valuable information for many aspects of seasonal climate prediction. As discussed above, RCMs, which are used to assess more spatially detailed information at local to regional scales, are driven by initial and boundary conditions obtained from global climate model simulations and predictions. An ensemble-based three-hidden-layer neural network can be used to correct the bias in these initial and boundary conditions, improving RCMs' assessment of present-day climate simulations and future change projections.
Acknowledgments.
This research is supported by the National Key R&D Program of China (2018YFA0605901) and the National Natural Science Foundation of China (42006188).
Data availability statement.
All the data analyzed here are openly available. NOAA OISST V2 data are provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, at https://psl.noaa.gov/data/gridded/data.noaa.oisst.v2.html. The NCEP CFSv2 data are publicly available from the NCEP website at https://cfs.ncep.noaa.gov/.
REFERENCES
Abhilash, S., A. K. Sahai, S. Pattnaik, B. N. Goswami, and A. Kumar, 2014: Extended range prediction of active-break spells of Indian summer monsoon rainfall using an ensemble prediction system in NCEP Climate Forecast System. Int. J. Climatol., 34, 98–113, https://doi.org/10.1002/joc.3668.
Bengio, Y., 2009: Learning deep architectures for AI. Found. Trends Mach. Learn., 2, 1–127, https://doi.org/10.1561/2200000006.
Bo, Z., X. Liu, W. Gu, A. Huang, Y. Fang, T. Wu, W. Jie, and Q. Li, 2020: Impacts of atmospheric and oceanic initial conditions on boreal summer intraseasonal oscillation forecast in the BCC model. Theor. Appl. Climatol., 142, 393–406, https://doi.org/10.1007/s00704-020-03312-2.
Bourassa, M. A., and Coauthors, 2013: High-latitude ocean and sea ice surface fluxes: Challenges for climate research. Bull. Amer. Meteor. Soc., 94, 403–423, https://doi.org/10.1175/BAMS-D-11-00244.1.
Chi, J., and H. Kim, 2017: Prediction of Arctic sea ice concentration using a fully data driven deep neural network. Remote Sens., 9, 1305, https://doi.org/10.3390/rs9121305.
Chokkavarapu, N., and V. R. Mandla, 2019: Comparative study of GCMs, RCMs, downscaling and hydrological models: A review toward future climate change impact estimation. SN Appl. Sci., 1, 1698, https://doi.org/10.1007/s42452-019-1764-x.
Collins, M., and M. R. Allen, 2002: Assessing the relative roles of initial and boundary conditions in interannual to decadal climate predictability. J. Climate, 15, 3104–3109, https://doi.org/10.1175/1520-0442(2002)015<3104:ATRROI>2.0.CO;2.
Eyring, V., S. Bony, G. A. Meehl, C. A. Senior, B. Stevens, R. J. Stouffer, and K. E. Taylor, 2016: Overview of the Coupled Model Intercomparison Project phase 6 (CMIP6) experimental design and organization. Geosci. Model Dev., 9, 1937–1958, https://doi.org/10.5194/gmd-9-1937-2016.
Hou, Z., J. Li, and B. Zuo, 2021: Correction of monthly SST forecasts in CFSv2 using the local dynamical analog method. Wea. Forecasting, 36, 843–858, https://doi.org/10.1175/WAF-D-20-0123.1.
Johnson, S. J., and Coauthors, 2019: SEAS5: The new ECMWF seasonal forecast system. Geosci. Model Dev., 12, 1087–1117, https://doi.org/10.5194/gmd-12-1087-2019.
Knutti, R., R. Furrer, C. Tebaldi, J. Cermak, and G. A. Meehl, 2010: Challenges in combining projections from multiple climate models. J. Climate, 23, 2739–2758, https://doi.org/10.1175/2009JCLI3361.1.
LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436–444, https://doi.org/10.1038/nature14539.
Liu, J., Y. Zhang, X. Cheng, and Y. Hu, 2019: Retrieval of snow depth over Arctic sea ice using a deep neural network. Remote Sens., 11, 2864, https://doi.org/10.3390/rs11232864.
Liu, Y., and H.-L. Ren, 2017: Improving ENSO prediction in CFSv2 with an analogue-based correction method. Int. J. Climatol., 37, 5035–5046, https://doi.org/10.1002/joc.5142.
Maraun, D., and Coauthors, 2010: Precipitation downscaling under climate change: Recent developments to bridge the gap between dynamical models and the end user. Rev. Geophys., 48, RG3003, https://doi.org/10.1029/2009RG000314.
Meehl, G. A., and Coauthors, 2021: Initialized Earth system prediction from subseasonal to decadal timescales. Nat. Rev. Earth Environ., 2, 340–357, https://doi.org/10.1038/s43017-021-00155-x.
Moalafhi, D. B., J. P. Evans, and A. Sharma, 2016: Evaluating global reanalysis datasets for provision of boundary conditions in regional climate modelling. Climate Dyn., 47, 2727–2745, https://doi.org/10.1007/s00382-016-2994-x.
Moalafhi, D. B., J. P. Evans, and A. Sharma, 2017: Influence of reanalysis datasets on dynamically downscaling the recent past. Climate Dyn., 49, 1239–1255, https://doi.org/10.1007/s00382-016-3378-y.
Pielke, R. A., Sr., and R. L. Wilby, 2012: Regional climate downscaling: What’s the point? Eos, Trans. Amer. Geophys. Union, 93, 52–53, https://doi.org/10.1029/2012EO050008.
Reichstein, M., G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, and Prabhat, 2019: Deep learning and process understanding for data-driven Earth system science. Nature, 566, 195–204, https://doi.org/10.1038/s41586-019-0912-1.
Reynolds, R. W., N. A. Rayner, T. M. Smith, D. C. Stokes, and W. Wang, 2002: An improved in situ and satellite SST analysis for climate. J. Climate, 15, 1609–1625, https://doi.org/10.1175/1520-0442(2002)015<1609:AIISAS>2.0.CO;2.
Robertson, A. W., F. Vitart, and S. J. Camargo, 2020: Subseasonal to seasonal prediction of weather to climate with application to tropical cyclones. J. Geophys. Res. Atmos., 125, e2018JD029375, https://doi.org/10.1029/2018JD029375.
Rockel, B., 2015: The regional downscaling approach: A brief history and recent advances. Curr. Climate Change Rep., 1, 22–29, https://doi.org/10.1007/s40641-014-0001-3.
Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.
Sangelantoni, L., A. Russo, and F. Gennaretti, 2019: Impact of bias correction and downscaling through quantile mapping on simulated climate change signal: A case study over central Italy. Theor. Appl. Climatol., 135, 725–740, https://doi.org/10.1007/s00704-018-2406-8.
Sarkar, P. P., P. Janardhan, and P. Roy, 2020: Prediction of sea surface temperatures using deep learning neural networks. SN Appl. Sci., 2, 1458, https://doi.org/10.1007/s42452-020-03239-3.
Tang, J., X. Niu, S. Wang, H. Gao, X. Wang, and J. Wu, 2016: Statistical downscaling and dynamical downscaling of regional climate in China: Present climate evaluations and future climate projections. J. Geophys. Res. Atmos., 121, 2110–2129, https://doi.org/10.1002/2015JD023977.
Tao, Y., X. Gao, K. Hsu, S. Sorooshian, and A. Ihler, 2016: A deep neural network modeling framework to reduce bias in satellite precipitation products. J. Hydrometeor., 17, 931–945, https://doi.org/10.1175/JHM-D-15-0075.1.
Tietsche, S., M. Balmaseda, H. Zuo, C. Roberts, M. Mayer, and L. Ferranti, 2020: The importance of North Atlantic Ocean transports for seasonal forecasts. Climate Dyn., 55, 1995–2011, https://doi.org/10.1007/s00382-020-05364-6.
Watt-Meyer, O., N. D. Brenowitz, S. K. Clark, B. Henn, A. Kwa, J. McGibbon, W. A. Perkins, and C. S. Bretherton, 2021: Correcting weather and climate models by machine learning nudged historical simulations. Geophys. Res. Lett., 48, e2021GL092555, https://doi.org/10.1029/2021GL092555.
Wu, A., W. W. Hsieh, and A. Shabbar, 2005: The nonlinear patterns of North American winter temperature and precipitation associated with ENSO. J. Climate, 18, 1736–1752, https://doi.org/10.1175/JCLI3372.1.
Xiao, C., N. Chen, C. Hu, K. Wang, J. Gong, and Z. Chen, 2019: Short and mid-term sea surface temperature prediction using time-series satellite data and LSTM-AdaBoost combination approach. Remote Sens. Environ., 233, 111358, https://doi.org/10.1016/j.rse.2019.111358.
Yang, X., S. Yang, M. L. Tan, H. Pan, H. Zhang, G. Wang, R. He, and Z. Wang, 2022: Correcting the bias of daily satellite precipitation estimates in tropical regions using deep neural network. J. Hydrol., 608, 127656, https://doi.org/10.1016/j.jhydrol.2022.127656.
Zhang, L., Y. Xu, C. Meng, X. Li, H. Liu, and C. Wang, 2020: Comparison of statistical and dynamic downscaling techniques in generating high-resolution temperatures in China from CMIP5 GCMs. J. Appl. Meteor. Climatol., 59, 207–235, https://doi.org/10.1175/JAMC-D-19-0048.1.
Zhou, X., G. Huang, X. Wang, Y. Fan, and G. Cheng, 2018: A coupled dynamical-copula downscaling approach for temperature projections over the Canadian prairies. Climate Dyn., 51, 2413–2431, https://doi.org/10.1007/s00382-017-4020-3.