Correcting Nonstationary Sea Surface Temperature Bias in NCEP CFSv2 Using Ensemble-Based Neural Networks

Ziying Yang (a,b), Jiping Liu (c), Chao-Yuan Yang (d), and Yongyun Hu (e)

a State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics, Chinese Academy of Sciences, Beijing, China
b University of Chinese Academy of Sciences, Beijing, China
c School of Atmospheric Sciences, Sun Yat-sen University, Zhuhai, China
d Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai), Zhuhai, China
e Department of Atmospheric and Oceanic Sciences, School of Physics, Peking University, Beijing, China

Ziying Yang: https://orcid.org/0000-0003-1694-2169

Abstract

Sea surface temperature (SST) forecast products from the NCEP Climate Forecast System (CFSv2), which are widely used in climate research and prediction, have nonstationary bias. In this study, we develop single-hidden-layer (ANN1) and three-hidden-layer (ANN3) neural networks and examine their ability to correct the SST bias in the NCEP CFSv2 extended seasonal forecast starting from July in the extratropical Northern Hemisphere. Our results show that the ensemble-based ANN1 and ANN3 can reduce the uncertainty associated with the initially assigned parameters and the dependence on random sampling. Overall, ANN1 reduces the RMSE of the CFSv2 forecast SST substantially, by 0.35°C (0.34°C) for the testing (training) data, and ANN3 further reduces it, by 0.49°C (0.47°C) relative to CFSv2. Both the ensemble-based ANN1 and ANN3 can significantly reduce the spatially and temporally varying bias of the CFSv2 forecast SST in the Pacific and Atlantic Oceans, and ANN3 shows better agreement with the observation than ANN1 in some subregions.

Significance Statement

Global coupled climate models are the primary tool for climate simulation and prediction and provide initial and boundary conditions to drive regional climate models. SST is an essential climate variable simulated and forecast by global climate models, and it suffers from substantial biases both spatially and temporally. We apply the ensemble averaging of both single- and three-hidden-layer neural networks to the NCEP CFSv2 SST forecast. Both can correct the identified SST error, with ANN3 performing better than ANN1. Thus, the ensemble-based ANN3 is a useful SST bias correction approach.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Jiping Liu, liujp63@mail.sysu.edu.cn


1. Introduction

Coupled global climate models are the primary tool for predicting how climate may change in the near and distant future, i.e., from seasonal to century time scales. Compared to observations, their simulations have systematic biases, which can be caused by a variety of factors (Zhang et al. 2020). Due to computational resource constraints, coarse resolution (∼1°) continues to be commonly used by coupled global climate models (e.g., CMIP6; Eyring et al. 2016) and many studies of ocean–atmosphere interactions. The coarse-resolution models are insufficient for resolving important physical processes at smaller scales, which limits their usefulness for regional climate prediction and climate impact assessments (Maraun et al. 2010; Watt-Meyer et al. 2021).

Dynamical downscaling using regional climate models (RCMs) can transfer global coarse simulations to finer regional or local scales of interest (Pielke and Wilby 2012; Zhou et al. 2018). RCMs have higher resolution and are driven by initial and boundary conditions obtained from simulations of global climate models, which enable us to assess more spatially detailed information at local to regional scales (Chokkavarapu and Mandla 2019; Rockel 2015). RCMs are also widely used to predict regional climate, which require the initial and boundary conditions supplied by global climate models (Maraun et al. 2010; Tang et al. 2016). To date, more than 60 RCMs have been developed and used to provide regional climate simulations and predictions worldwide (Sangelantoni et al. 2019). The accuracy of the RCMs’ downscaled simulations and climate predictions is strongly influenced by systematic biases of global climate models, which can be transferred to RCMs through the initial and boundary conditions. This affects the assessment of present climate simulations and future change projections (Collins and Allen 2002; Moalafhi et al. 2016, 2017). Thus, the bias correction for global climate model simulations is needed, rather than directly using them as the initial and boundary conditions of RCMs.

Sea surface temperature (SST) is an essential climate variable in ocean–atmosphere interactions, forcing the atmospheric variability. Air–sea turbulent fluxes of heat, moisture, and momentum establish the link between SST changes and atmospheric variability, providing mechanisms of ocean–atmosphere interactions (e.g., Bourassa et al. 2013). SST simulated by global climate models is used as the initial and boundary conditions for RCMs, which strongly influences the performance of RCMs, especially in areas with strong atmosphere–ocean interactions (Rockel 2015). For example, SST forecast products from the NCEP Climate Forecast System (CFSv2) and ECMWF’s long-range forecasting system (SEAS5) are widely used by RCMs for weather and climate predictions (Abhilash et al. 2014; Tietsche et al. 2020).

Saha et al. (2014) showed that the SST bias in CFSv2 is small in the tropical ocean but large in mid- and high-latitude oceans, and the bias is significant in winter. Johnson et al. (2019) showed that SEAS5 has a warm SST bias in the northern tropical ocean and different biases in the northern extratropical ocean. The error of the SST forecast by global climate models at lead times of several months varies with location and time. Thus, directly using the forecast SST from global climate models as the input of an RCM introduces large uncertainty, which might have profound impacts on the local to regional simulations. Tietsche et al. (2020) presented evidence that the forecast DJF SST bias of SEAS5 in the subpolar North Atlantic Ocean can degrade regional skill and broadly affect the atmospheric forecast. The subseasonal-to-seasonal forecast is strongly influenced by both initial and boundary SST conditions, with the latter becoming increasingly more important than the former as the forecast period lengthens (Bo et al. 2020; Meehl et al. 2021; Robertson et al. 2020).

Correcting the simulated or predicted SST bias is challenging. Some studies corrected the SST bias using a simple arithmetic mean or weighted multimodel combinations, which showed some benefit but were not effective at correcting extreme fluctuations (e.g., Knutti et al. 2010). Recently, Liu and Ren (2017) developed an analog-based correction method that corrects CFSv2 SST empirically using historical forecast errors, which improves the predictive skill for El Niño–Southern Oscillation (ENSO). Hou et al. (2021) used a local dynamical analog algorithm to correct CFSv2 SST, which showed some positive effects on the predictive skill in some regions. However, these correction methods were mainly based on linear theory and were not effective for correcting nonlinear biases in the simulated or predicted SST.

Significant progress has been made in nonlinear techniques in the past decade, such as artificial neural networks, which can better handle the nonlinear and nonstationary characteristics in climate studies (LeCun et al. 2015; Reichstein et al. 2019). For example, neural networks have been successfully applied to short-term weather and climate predictions (Sarkar et al. 2020; Wu et al. 2005; Xiao et al. 2019) and to reducing biases in satellite precipitation products (Tao et al. 2016; Yang et al. 2022).

The goal of this paper is to examine to what extent neural networks can correct the nonstationary SST bias in the NCEP CFSv2 seasonal forecast compared to the observation. Specifically, we construct single- and three-hidden-layer neural networks as well as their ensemble averaging and compare their effectiveness in reducing spatial and temporal SST bias in the extratropical ocean in the CFSv2 seasonal forecast.

2. Data and methods

In this study, SST from the NCEP CFSv2 (Saha et al. 2014) operational 9-month forecast is used. The CFSv2 forecast runs are initialized from 0000, 0600, 1200, and 1800 UTC for each initial day, which provides valuable real-time data for many aspects of seasonal climate prediction. Here we focus on the CFSv2 forecast initiated at 1200 UTC from 1 July to 31 January of the following year for each year from 2011 to 2018 (i.e., for 2011, it is from 1 July 2011 to 31 January 2012). The CFSv2 forecast SST has a spatial resolution of 1° and a temporal resolution of 6 h. The 1/4° daily NOAA Optimum Interpolation Sea Surface Temperature version 2 (OISST; Reynolds et al. 2002) is also utilized, which is interpolated to the CFSv2 1° grid cell. Hereafter, we refer to OISST as the observation. In this study, we focus on the SST bias correction for the CFSv2 forecast in the mid- and high latitudes of the Northern Hemisphere.
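The paper states only that the 1/4° OISST is interpolated to the 1° CFSv2 grid. One simple, commonly used choice is block (area) averaging of each 4 × 4 group of fine cells; the Python sketch below is illustrative only (it assumes a regular grid with no land mask) and is not necessarily the authors' interpolation method.

```python
# Illustrative regridding sketch (not necessarily the authors' method):
# each 1-degree cell is the mean of the 4 x 4 quarter-degree cells it covers.
def block_average(fine, factor=4):
    """Average factor x factor blocks of a 2D list (fine grid -> coarse grid)."""
    ny, nx = len(fine), len(fine[0])
    coarse = []
    for j in range(0, ny, factor):
        row = []
        for i in range(0, nx, factor):
            block = [fine[jj][ii]
                     for jj in range(j, j + factor)
                     for ii in range(i, i + factor)]
            row.append(sum(block) / len(block))
        coarse.append(row)
    return coarse

# 8 x 8 quarter-degree tile of SST values -> 2 x 2 one-degree tile
fine = [[float(j) for i in range(8)] for j in range(8)]
coarse = block_average(fine)  # [[1.5, 1.5], [5.5, 5.5]]
```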

In this study, we examine two types of neural networks using the MATLAB deep learning toolbox to correct the nonstationary SST bias in the CFSv2 forecast. One is a single-hidden-layer neural network (hereafter ANN1), and the other is a three-hidden-layer neural network (hereafter ANN3). Figure 1 shows the topological configuration of ANN1 and ANN3. A fully connected neural network includes one input layer, one or several hidden layers, and one output layer. Here we use the CFSv2 forecast SST and its latitude, longitude, and lead time as the four features, represented by the input vector xi (i = 1, 2, 3, 4). The output y is the corrected CFSv2 SST obtained from ANN1 or ANN3 by mapping the input features xi to the output y through the nonlinear function y = f*(x; W, b). The parameters, the weights (W) and biases (b), are randomly assigned initially. The loss function used here is the mean-square error (MSE) between the output y (corrected CFSv2 SST) and the target ŷ (observed SST, OISST):
MSE = mean[(ŷ − y)²] = mean{[ŷ − f*(x; W, b)]²}.
The two neural networks are trained with the Levenberg–Marquardt (LM) backpropagation algorithm to find the parameters that minimize the loss function. The nonlinear activation in the single hidden layer is a1 = ϕ(W1x + b1), and in the three hidden layers it is a1 = ϕ(W1x + b1), a2 = ϕ(W2a1 + b2), and a3 = ϕ(W3a2 + b3), where ai is the output of that hidden layer and the input of the next layer, and ϕ(z) = 1/(1 + e^−z) is the sigmoid function. Thus, ANN3 can learn more complex nonlinear relationships than ANN1.
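The architectures above can be sketched outside MATLAB. The following Python (not the authors' code) implements the forward pass of ANN1 and ANN3 and the MSE loss; the Levenberg–Marquardt training step is omitted, and a linear output layer is an assumption on our part (the paper does not state the output activation).

```python
# Sketch of the ANN1/ANN3 forward pass: inputs x = (CFSv2 SST, lat, lon,
# lead time), sigmoid hidden layers, and an (assumed) linear output layer.
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dense(x, W, b, activation=None):
    """One fully connected layer: a = phi(W x + b)."""
    a = [sum(wij * xj for wij, xj in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [activation(ai) for ai in a] if activation else a

def forward(x, layers):
    """layers: list of (W, b, activation); returns the scalar network output y."""
    a = x
    for W, b, act in layers:
        a = dense(a, W, b, act)
    return a[0]

def mse(pairs):
    """Loss used in the paper: mean of (y_hat - y)^2 over (target, output) pairs."""
    return sum((t - y) ** 2 for t, y in pairs) / len(pairs)

def random_layer(n_in, n_out, act, rng):
    """Randomly assigned initial weights and biases, as in the paper's setup."""
    W = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [rng.uniform(-0.5, 0.5) for _ in range(n_out)]
    return (W, b, act)

rng = random.Random(0)
# ANN1: 4 inputs -> 15 sigmoid neurons -> 1 linear output
ann1 = [random_layer(4, 15, sigmoid, rng), random_layer(15, 1, None, rng)]
# ANN3: 4 inputs -> [10, 10, 10] sigmoid neurons -> 1 linear output
ann3 = [random_layer(4, 10, sigmoid, rng),
        random_layer(10, 10, sigmoid, rng),
        random_layer(10, 10, sigmoid, rng),
        random_layer(10, 1, None, rng)]

x = [18.2, 45.0, 150.0, 61.0]  # hypothetical (SST degC, lat, lon, lead day)
y1, y3 = forward(x, ann1), forward(x, ann3)
```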
Fig. 1.

Structure of the developed single-hidden-layer (upper panel in hidden layers) and three-hidden-layer (lower panel in hidden layers) neural networks. The networks have four input features xi (CFSv2 SST, latitude, longitude, and lead time), which are nonlinearly mapped to the output y (corrected CFSv2 SST). The parameters, the weights (W) and biases (b), are repeatedly updated by the Levenberg–Marquardt backpropagation algorithm to minimize the loss function MSE = mean[(ŷ − y)²], where ŷ is the observed SST (OISST).

Citation: Journal of Atmospheric and Oceanic Technology 40, 8; 10.1175/JTECH-D-22-0066.1

Here we divide the CFSv2 forecast SST into two groups: the data from 2011 to 2017 are used as the training data, whereas the data in 2018 are used as the testing data, which do not participate in the training of the neural networks. For the training, both ANN1 and ANN3 randomly divide the 2011–17 data into a training sample, a validation sample, and a testing sample (the three subsets do not overlap), with ratios of 70%, 15%, and 15%, respectively. The training sample is used to find the relationship between the input and the target; the validation sample is used to evaluate whether the network finds the optimal relationship from the training sample and to adjust the parameters to avoid overfitting. The testing sample is used to further evaluate the performance of the network as an independent sample.
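The 70%/15%/15% random division into non-overlapping subsets can be sketched as follows. The seed and function name are illustrative; the paper used the MATLAB toolbox's built-in data division.

```python
# Sketch of the 70%/15%/15% random split into non-overlapping
# train/validation/test subsets of the 2011-17 samples.
import random

def split_samples(indices, seed=0, fractions=(0.70, 0.15, 0.15)):
    """Shuffle sample indices and cut them into train/validation/test subsets."""
    idx = list(indices)
    random.Random(seed).shuffle(idx)
    n = len(idx)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# 1000 hypothetical (grid point, day) samples -> subsets of 700, 150, 150
train, val, test = split_samples(range(1000))
```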

To determine the optimal number of neurons for ANN1 and ANN3, we test 1, 5, 10, 15, 20, 25, and 30 neurons for ANN1, and [1, 1, 1], [5, 5, 5], [10, 10, 10], [15, 15, 15], [20, 20, 20], [25, 25, 25], and [30, 30, 30] neurons for ANN3. The correlation coefficient (R) and root-mean-square error (RMSE) between the output of ANN1 or ANN3 and the observed SST are used to evaluate the performance of these neural networks.
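The two selection metrics are standard; a minimal Python sketch of RMSE and the Pearson correlation coefficient R (the example SST values are hypothetical):

```python
# Minimal sketch of the two model-selection metrics: RMSE and the Pearson
# correlation coefficient R between network output and observed SST.
import math

def rmse(obs, pred):
    """Root-mean-square error between paired observation/prediction lists."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

def corr(obs, pred):
    """Pearson correlation coefficient R."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

# Hypothetical observed vs. corrected SST values (degC)
obs = [10.0, 12.0, 11.5, 9.0]
ann = [10.2, 11.8, 11.9, 9.3]
score_rmse, score_r = rmse(obs, ann), corr(obs, ann)
```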

3. Results

a. Single-hidden-layer neural network

As shown in Fig. 2a, the evaluation based on the test sample of ANN1 suggests that the RMSE for the training data decreases with increasing number of neurons but tends to level off once the number of neurons exceeds 15. Meanwhile, R increases with increasing number of neurons and approaches an equilibrium once the number of neurons exceeds 15 (Fig. 3a). This is also the case for the independent testing data in 2018 (Figs. 2b and 3b). Thus, we choose 15 neurons to develop ANN1.

Fig. 2.

The RMSE of the (a)–(d) ANN1 and (e)–(h) ANN3 for (left) training data and (right) testing data, calculated between the output of ANN1 or ANN3 and the observed SST. The numbers on the x axis in (a), (b), (e), and (f) are different numbers of hidden-layer neurons of ANN1 and ANN3. (c),(d) and (g),(h) The RMSE of ANN1 and ANN3, respectively, with the optimal neurons in the hidden layer (15 neurons for ANN1 and [10, 10, 10] neurons for ANN3). The numbers on the x axis in (c), (d), (g), and (h) are neural network members 1–20, and the solid line represents the RMSE of the ensemble averaging of ANN1 and ANN3.


Fig. 3.

As in Fig. 2, but for the correlation coefficient (R) of the ANN1 and ANN3, which are calculated between the output of ANN1 or ANN3 and observed SST.


Previous studies suggested that a neural network may be sensitive to network parameters such as the initially assigned weights and biases (e.g., Liu et al. 2019). Here we run ANN1 (with 15 neurons) 20 times with random weights and biases assigned initially, each trained with a training sample randomly selected from the 2011–17 training data. The performance of each member in the ensemble-based ANN1 is shown in Figs. 2c and 2d. The RMSEs of the 20 ANN1s vary from 1.10° to 1.24°C, depending on the initial weights and biases and the random sampling, though R changes minimally (Figs. 3c,d). To obtain a robust network and reduce the sensitivity to the initial parameters given the large size of the SST data, we compute the ensemble average of the 20 ANN1s as the final output. The ensemble average of the 20 ANN1s has an RMSE of 1.09°C (1.14°C) for the training (testing) data, which is smaller than that of every individual ANN1. Note that the ensemble averaging also tends to reduce the effect of singular values.
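The ensemble averaging itself is simple: the final corrected SST at each grid point and time is the mean over the 20 member outputs, which damps the spread caused by random initialization and random sampling. A minimal sketch (Python; the member values below are hypothetical):

```python
# Sketch of the ensemble averaging: the final corrected SST at each sample
# is the element-wise mean of the member outputs.
def ensemble_mean(member_predictions):
    """Average equal-length prediction lists element by element."""
    n_members = len(member_predictions)
    return [sum(vals) / n_members for vals in zip(*member_predictions)]

# three members' corrected SST at two grid points (degC; hypothetical)
members = [[1.0, 2.0], [3.0, 2.0], [2.0, 5.0]]
final = ensemble_mean(members)  # [2.0, 3.0]
```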

Table 1 shows the RMSE between the SST of CFSv2, ANN1, and ANN3 and OISST for the testing data in 2018. The ensemble-based ANN1 reduces the RMSE between the CFSv2 predicted SST and OISST by about 0.35°C for the testing data in the extratropical Northern Hemisphere, and the ensemble-based ANN3 further reduces it, by about 0.49°C relative to CFSv2. This is also reflected in five selected regions in the Atlantic and Pacific. The RMSE between the CFSv2 forecast SST and OISST is decreased by 0.95°, 0.56°, 0.53°, 0.46°, and 0.29°C by ANN1 for regions 1, 2, 3, 4, and 5, respectively, and by 0.99°, 0.58°, 0.67°, 0.76°, and 0.41°C by ANN3. Thus, ANN3 performs better than ANN1 in correcting the bias of the CFSv2 forecast SST.

Table 1.

RMSE between SST of CFSv2, ANN1, ANN3, and OISST for the testing data in 2018.


As shown in Fig. 4 (left column), the spatial distributions of the SST bias corrected by the best and worst ANN1 members are not entirely consistent; i.e., the best network shows that the corrected CFSv2 SST has a warm bias in the subtropics and midlatitudes of the central Pacific and Atlantic, whereas the worst network has the opposite bias. Such discrepancy among individual networks arises largely because the weights and biases are randomly assigned initially and the training sample (subset) of a single neural network is randomly selected from the training data. The sensitivity to initial parameter values and the possible sampling dependence can be reduced with ensemble averaging. The spatial distribution of the SST bias of the ensemble average of 20 ANN1s is in good agreement with that of 10 ANN1s (Figs. 4f,h).

Fig. 4.

Spatial distribution of SST bias on 1 Jul 2018. (a) Difference between the CFSv2 prediction and OISST. (b),(d),(f),(h) Difference between the output obtained from the best, worst, No. 10, and No. 20 members of ANN1 and CFSv2 predicted SST. (c),(e),(g),(i) As in (b), (d), (f), and (h), but for difference between the members of ANN3 and CFSv2 predicted SST.


We examine the spatial distribution of the nonstationary SST bias corrected by the ensemble average of 20 ANN1s at different forecast times in 2015 (the training data) and in 2018 (the testing data). Figure 5 shows the SST difference between the CFSv2 forecast and OISST, between the ensemble-based ANN1 and the CFSv2 forecast, and between the ensemble-based ANN1 and OISST on 30 August (day 61 of the forecast) and 28 November (day 151 of the forecast). For the forecast SST on day 61 for 2015 and 2018 (the same summer date in different years), CFSv2 shows large warm biases extending from the midlatitudes of the central Atlantic to the Barents–Kara Sea and along the east coast of Asia (in 2015) or the west coast of North America (in 2018) to the Chukchi–Beaufort Sea, and large cold biases in the subtropics and midlatitudes of the western Atlantic and central Pacific. The ensemble-based ANN1 significantly reduces the warm biases in the above areas (Figs. 5a,c). For the forecast SST on day 151 for 2015 and 2018 (the same winter date in different years), the widespread large cold biases in much of the Pacific and the subtropics and midlatitudes of the western Atlantic are significantly reduced (Figs. 5b,d).

Fig. 5.

SST error distribution at different forecast times in 2015 (30 Aug and 28 Nov 2015 for the training data) and in 2018 (30 Aug and 28 Nov 2018 for the testing data). (a1)–(d1) Differences between the CFSv2 forecast and OISST. (a2)–(d2) Differences between the ensemble-based ANN1 and CFSv2 and (e1)–(h1) differences between the ensemble-based ANN3 and CFSv2. (a3)–(d3) Differences between the ensemble-based ANN1 and OISST and (e2)–(h2) differences between the ensemble-based ANN3 and OISST.


Next, we examine the evolution over time of the nonstationary SST bias corrected by the ensemble average of 20 ANN1s. Here we select three regions in the high latitudes of the Atlantic and Pacific (Fig. 6): region 1 (60°–80°N, 30°W–0°) includes the Barents Sea, a transition area where warm Atlantic water moves toward the Arctic Ocean; region 2 (68°–80°N, 10°–60°E) includes the northwestern Atlantic, which contains the subpolar gyre; and region 3 (58°–72°N, 170°E–160°W) represents the Bering Sea and the Chukchi Sea. Figures 7a–f show the evolution of the averaged SST in the three selected regions for 2015 (the training data) and 2018 (the testing data). For 2015, CFSv2 predicts warmer SST than the observation from early July to early October, with the largest warm bias in late August (∼3°C), for all three regions. The CFSv2 forecast then approaches the OISST until late November, after which the CFSv2 predicted SST shifts to a cold bias for regions 1 and 2 but retains a warm bias for region 3. This is also true for 2018. Encouragingly, the SST corrected by the ensemble-based ANN1 is in good agreement with the observation, reducing the aforementioned time-varying SST biases, although it shows a cold bias in the corrected CFSv2 SST from September to October in 2018 for region 3 and an opposite (but comparable) bias over the same period for region 1.

Fig. 6.

Five selected regions in the Atlantic and Pacific Oceans in this study. The black boxes indicate the three high-latitude regions: region 1 (60°–80°N, 30°W–0°) includes the Barents Sea, region 2 (68°–80°N, 10°–60°E) includes the northwestern Atlantic, and region 3 (58°–72°N, 170°E–160°W) includes the Bering Sea and the Chukchi Sea. The red boxes indicate the two subtropical and midlatitude regions: region 4 (30°–45°N, 120°E–180°) and region 5 (30°–45°N, 45°W–0°).


Fig. 7.

Regionally averaged daily SST from the initial forecast time to day 215 for CFSv2 (red line), OISST (blue line), and the ensemble of 20 (a)–(f) ANN1s (black line) and (g)–(l) ANN3s (black line) in 2015 (the training data) and in 2018 (the testing data). (a),(d),(g),(j) Region 1 (60°–80°N, 30°W–0°); (b),(e),(h),(k) region 2 (68°–80°N, 10°–60°E); and (c),(f),(i),(l) region 3 (58°–72°N, 170°E–160°W).


b. Three-hidden-layer neural network

Previous studies suggested that, with multiple hidden layers, ANN3 may perform better at identifying abstract and vital input features and reducing irrelevant information (Bengio 2009; Chi and Kim 2017; LeCun et al. 2015). As shown in Fig. 2e (the training data) and Fig. 2f (the testing data), for ANN3 the RMSE decreases quickly with the increasing number of neurons and tends to approach an equilibrium as the number of neurons increases to 10. Thus, we choose [10, 10, 10] neurons to develop ANN3, whose training time is about half of that using [15, 15, 15] neurons. Again, we run ANN3 20 times with random parameters assigned initially and train each with random sampling. As shown in Figs. 2g and 2h, the RMSEs of the 20 ANN3s vary from 0.93° to 1.1°C with the initial weights and biases, a range smaller than that of the 20 ANN1s discussed above (Figs. 2c,d). The ensemble average has an RMSE of 0.96°C (1.0°C) for the training (testing) data, which is smaller than that of every individual ANN3. Thus, the ensemble-based ANN3 performs better than the ensemble-based ANN1 for both the training and testing data.

Like ANN1, the spatial distributions of the SST bias corrected by the best and worst ANN3 members are not entirely consistent in some regions, though the discrepancy is smaller than for ANN1. The ensemble averaging of 20 ANN3s effectively reduces the sensitive dependence on initial parameters, which is also reflected by its consistency with the ensemble average of 10 ANN3s (Figs. 4g,i).

As shown by the SST bias distribution at different forecast days (Figs. 5e–h), the ensemble-based ANN3 markedly reduces 1) the warm (cold) biases extending from the midlatitudes of the central Atlantic to the Barents–Kara Sea and along the east coast of Asia and the west coast of North America to the Pacific sector of the Arctic (in the subtropics and midlatitudes of the western Atlantic and the western and central Pacific) on day 61 for 2015 and 2018 (Figs. 5e,g), and 2) the cold biases in much of the Pacific and the subtropics and midlatitudes of the western Atlantic on day 151 for 2015 and 2018 (Figs. 5f,h). Compared to the ensemble-based ANN1 (Figs. 5a–d), the ensemble-based ANN3 performs better in correcting the bias of the CFSv2 forecast SST, especially its magnitude.

Moreover, the ensemble-based ANN3 corrects the time-varying SST bias better than the ensemble-based ANN1; e.g., the ensemble-based ANN3 shows better agreement with the observed SST after late November in region 3 for both 2015 and 2018 (Figs. 7g–l). We further compare the evolution of the SST bias correction between the ensemble averages of 20 ANN1s and 20 ANN3s for two more regions in the subtropics and midlatitudes of the Atlantic and Pacific Oceans (Fig. 6): region 4 (30°–45°N, 120°E–180°) and region 5 (30°–45°N, 45°W–0°). As shown in Fig. 8, the ensemble-based ANN3 has smaller errors than the ensemble-based ANN1 in the two regions, especially in December and the following January.

Fig. 8.

Regionally averaged absolute error of daily SST relative to OISST for CFSv2 (black line) and the ensemble-based ANN1 (blue line) and ANN3 (red line), from the initial forecast time to day 215, exemplified by training data in 2015 and testing data in 2018. (a),(c) Region 4 (30°–45°N, 120°E–180°) and (b),(d) region 5 (30°–45°N, 45°W–0°).


Figure 9 shows scatterplots between the OISST and the ensemble-based ANN1 and ANN3 SST on day 50 (19 August 2018) and day 180 (27 December 2018) for the testing data. Both the ensemble-based ANN1 and ANN3 SST show better agreement with the observations than the CFSv2 forecast SST in the five subregions, as reflected by the correlation coefficients. The scatter markers of the ensemble-based ANN3 versus OISST are more concentrated near the regression line than those of ANN1 for some subregions (i.e., Fig. 9c3 versus Fig. 9d3) and region 4 (Figs. 9c4,d4), which is also reflected by the correlation coefficients.

Fig. 9.

Scatterplot between the OISST and the ensemble-based (a1)–(a5),(c1)–(c5) ANN1 and (b1)–(b5),(d1)–(d5) ANN3 SST on (a1)–(b5) day 50 and (c1)–(d5) day 180 in 2018. Black dots are CFSv2 SST vs OISST, and red dots are ANN1 and ANN3 SST vs OISST. The dotted line represents the linear regression between the CFSv2 predicted SST and OISST. The solid line represents the linear regression between ANN1 or ANN3 SST and OISST.


4. Conclusions

In this study, we investigate whether deep learning models are capable of correcting the nonstationary SST bias in a coupled climate prediction model by constructing ANN1 and ANN3 models. Our results demonstrate that the ensemble-based neural networks can reduce the uncertainty associated with the initially assigned parameters and the dependence on random sampling compared to a single neural network. Both the ensemble-based ANN1 with 15 neurons and the ensemble-based ANN3 with [10, 10, 10] neurons in three hidden layers can effectively reduce the bias of the CFSv2 forecast SST both spatially and temporally. With multiple hidden layers, the ensemble-based ANN3 agrees better with the observation than the ensemble-based ANN1 for both the training and testing data, i.e., smaller bias in the subtropics and midlatitudes of the Atlantic and Pacific. However, this study is conducted for the SST forecast by CFSv2 at a fixed initial time (1200 UTC 1 July). We will extend the analysis to include different initial times in future research.

Since there is no large difference in the time cost to train the ensemble-based ANN1 and ANN3, i.e., ∼10–14 h to train ANN1 (20 members) with 15 neurons and ∼12–16 h to train ANN3 (20 members) with [10, 10, 10] neurons on the same computing cluster, our study suggests that the ensemble-based three-hidden-layer neural network is a useful tool for correcting variables forecast by global climate models, providing valuable information for many aspects of seasonal climate prediction. As discussed previously, RCMs, which are used to assess more spatially detailed information at local to regional scales, are driven by initial and boundary conditions obtained from simulations and predictions of global climate models. An ensemble-based three-hidden-layer neural network can be used to correct the bias in initial and boundary conditions, which can improve RCMs' assessment of present climate simulations and future change projections.

Acknowledgments.

This research is supported by the National Key R&D Program of China (2018YFA0605901) and the National Natural Science Foundation of China (42006188).

Data availability statement.

All the data analyzed here are openly available. NOAA OISST V2 data are provided by the NOAA/OAR/ESRL PSL, Boulder, Colorado, at https://psl.noaa.gov/data/gridded/data.noaa.oisst.v2.html. The NCEP CFSv2 data are publicly available from the NCEP website at https://cfs.ncep.noaa.gov/.

  • Fig. 1.

Structure of the developed single- (upper panel in hidden layers) and three- (lower panel in hidden layers) hidden-layer neural networks. The networks have four input features xi (CFSv2 SST, latitude, longitude, and lead time), which are nonlinearly mapped to the output y (corrected CFSv2 SST). The parameters, the weights (w) and biases (b), are iteratively updated by the Levenberg–Marquardt backpropagation algorithm to minimize the loss function MSE = (ŷ − y)², where ŷ is the observed SST (OISST).

  • Fig. 2.

The RMSE of the (a)–(d) ANN1 and (e)–(h) ANN3 for (left) training data and (right) testing data, calculated between the output of ANN1 or ANN3 and the observed SST. The numbers on the x axis in (a), (b), (e), and (f) are the different numbers of hidden-layer neurons tested for ANN1 and ANN3. (c),(d) and (g),(h) The RMSE of ANN1 and ANN3, respectively, with the optimal number of neurons in the hidden layer(s) (15 neurons for ANN1 and [10, 10, 10] neurons for ANN3). The numbers on the x axis in (c), (d), (g), and (h) are neural network members 1–20, and the solid line represents the RMSE of the ensemble average of ANN1 and ANN3.

  • Fig. 3.

    As in Fig. 2, but for the correlation coefficient (R) of the ANN1 and ANN3, which are calculated between the output of ANN1 or ANN3 and observed SST.

  • Fig. 4.

    Spatial distribution of SST bias on 1 Jul 2018. (a) Difference between the CFSv2 prediction and OISST. (b),(d),(f),(h) Difference between the output obtained from the best, worst, No. 10, and No. 20 members of ANN1 and CFSv2 predicted SST. (c),(e),(g),(i) As in (b), (d), (f), and (h), but for difference between the members of ANN3 and CFSv2 predicted SST.

  • Fig. 5.

    SST error distribution at different forecast times in 2015 (30 Aug and 28 Nov 2015 for the training data) and in 2018 (30 Aug and 28 Nov 2018 for the testing data). (a1)–(d1) Differences between the CFSv2 forecast and OISST. (a2)–(d2) Differences between the ensemble-based ANN1 and CFSv2 and (e1)–(h1) differences between the ensemble-based ANN3 and CFSv2. (a3)–(d3) Differences between the ensemble-based ANN1 and OISST and (e2)–(h2) differences between the ensemble-based ANN3 and OISST.

  • Fig. 6.

Five selected regions in the Atlantic and Pacific Oceans in this study. The black boxes indicate the three high-latitude regions: region 1 (68°–80°N, 30°W–0°) covers the northwestern Atlantic, region 2 (68°–80°N, 10°–60°E) includes the Barents Sea, and region 3 (58°–72°N, 170°E–160°W) includes the Bering Sea and the Chukchi Sea. The red boxes indicate the two subtropical and midlatitude regions: region 4 (30°–45°N, 120°E–180°) and region 5 (30°–45°N, 45°W–0°).

  • Fig. 7.

Regionally averaged daily SST from the initial forecast time to day 215 for CFSv2 (red line), OISST (blue line), and the ensemble of 20 (a)–(f) ANN1s (black line) and (g)–(l) ANN3s (black line) in 2015 (the training data) and in 2018 (the testing data). (a),(d),(g),(j) Region 1 (68°–80°N, 30°W–0°), (b),(e),(h),(k) region 2 (68°–80°N, 10°–60°E), and (c),(f),(i),(l) region 3 (58°–72°N, 170°E–160°W).

  • Fig. 8.

Regionally averaged absolute error of daily SST between the CFSv2 forecast and OISST (black line) and between the ensemble-based ANN1 (blue line) and ANN3 (red line) and OISST, from the initial forecast time to day 215, exemplified by training data in 2015 and testing data in 2018. (a),(c) Region 4 (30°–45°N, 120°E–180°) and (b),(d) region 5 (30°–45°N, 45°W–0°).

  • Fig. 9.

    Scatterplot between the OISST and the ensemble-based (a1)–(a5),(c1)–(c5) ANN1 and (b1)–(b5),(d1)–(d5) ANN3 SST on (a1)–(b5) day 50 and (c1)–(d5) day 180 in 2018. Black dots are CFSv2 SST vs OISST, and red dots are ANN1 and ANN3 SST vs OISST. The dotted line represents the linear regression between the CFSv2 predicted SST and OISST. The solid line represents the linear regression between ANN1 or ANN3 SST and OISST.
