A Deep Learning Filter for the Intraseasonal Variability of the Tropics

Cristiana Stan, Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, Fairfax, Virginia

https://orcid.org/0000-0002-0076-0574
and
Rama Sesha Sridhar Mantripragada, Department of Atmospheric, Oceanic and Earth Sciences, George Mason University, Fairfax, Virginia

Open access

Abstract

This paper presents a novel application of convolutional neural network (CNN) models for filtering the intraseasonal variability of the tropical atmosphere. In this deep learning filter, two convolutional layers are applied sequentially in a supervised machine learning framework to extract the intraseasonal signal from the total daily anomalies. The CNN-based filter can be tailored for each field similarly to fast Fourier transform filtering methods. When applied to two different fields (zonal wind stress and outgoing longwave radiation), the index of agreement between the filtered signal obtained using the CNN-based filter and a conventional weight-based filter is between 95% and 99%. The advantage of the CNN-based filter over the conventional filters is its applicability to time series with a length comparable to the period of the signal being extracted.

Significance Statement

This study proposes a new method for discovering hidden connections in data representative of tropical atmosphere variability. The method makes use of an artificial intelligence (AI) algorithm that combines a mathematical operation known as convolution with a mathematical model built to reflect the behavior of the human brain, known as an artificial neural network. Our results show that the filtered data produced by the AI-based method are consistent with the results obtained using conventional mathematical algorithms. The advantage of the AI-based method is that it can be applied to cases for which the conventional methods have limitations, such as forecast (hindcast) data or real-time monitoring of tropical variability in the 20–100-day range.

© 2023 American Meteorological Society. This published article is licensed under the terms of the default AMS reuse license. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Cristiana Stan, cstan@gmu.edu


1. Introduction

Variability of the Earth's climate system can be decomposed into a broad spectrum with a continuous distribution (Hasselmann 1976). Separation of signals within a spectral band is the first step in the process of understanding physical mechanisms driving the variability associated with that signal. Numerical methods routinely used in signal analysis (e.g., spectral and weight-based filters) are sensitive to the sample size. These methods work well when applied to time series of observations and climate model simulations but have limitations when applied to model forecast data or real-time monitoring based on observations. One particular example of such limitations is the intraseasonal variability (ISV) of the tropics (20–100 days), which is the target of subseasonal-to-seasonal (S2S) prediction and operational forecast systems. Due to the theoretical limit of predictability of ISV, the length of the S2S forecasts is between 35 and 45 days, which creates challenges for removal of low-frequency variability (>100 days). When analyzing forecast data, using a conventional method for extracting the ISV from total anomaly fields requires blending the forecast with observations to extend the length of the forecast time series and allow for the removal of low-frequency variability associated with El Niño–Southern Oscillation (ENSO). The padding with observations varies from 90 days (e.g., Gottschalck et al. 2010) to 2 years (Janiga et al. 2018) prior to initialization of the forecast. This step introduces artificial or spurious features that affect the forecast skill. The limited ability to properly diagnose the ISV in the forecast models restricts our ability to understand modeling capabilities for predicting variability on these time scales and narrows the opportunities for improving forecasting systems. Evaluation of ISV in the forecast is important not only for tropical predictions but also for the prediction of atmospheric teleconnections between the tropics and extratropics, which vary from high-impact weather events to large-scale barotropic structures (Stan et al. 2017). While forecast data need to be extended prior to the initialization of the forecast, real-time monitoring based on observations requires extrapolation of data into the future in order to extract the ISV signal present at the current state.

Machine learning and artificial intelligence (ML/AI) approaches have the potential to perform signal processing that overcomes the limitations of conventional statistical approaches. Artificial neural networks (ANN; Rumelhart et al. 1986) have already been used in geosciences for identification and classification of patterns and signals of climate variability (Li et al. 2016; Liu et al. 2016; Barnes et al. 2019; Toms et al. 2020; Yoo et al. 2020; Toms et al. 2021; Labe and Barnes 2021; Mayer and Barnes 2021), improving the signal-to-noise ratio of seismological datasets (Chen et al. 2019), and detection of errors in model-generated datasets (Moghim and Bras 2017; Dutta and Bhattacharjya 2022). Because ML/AI methods are based on identifying hidden regularities embedded in the data (Flach 2012), they have the potential to succeed in extracting the ISV of the tropical atmosphere, because a substantial portion of this variability is explained by regularities in the form of large-scale waves (∼10 000 km) and modes that manifest in basic-state variables (e.g., pressure, wind, temperature) as well as in physical phenomena (e.g., rainfall, cloudiness) aggregated across multiple scales by organizing mechanisms. The equatorial waves are known as convectively coupled equatorial waves (e.g., Rossby waves, inertia–gravity waves, mixed Rossby–gravity waves, and Kelvin waves). The dominant modes include the Madden–Julian oscillation (MJO; Madden and Julian 1971, 1972), the boreal summer intraseasonal oscillation (BSISO; Lau and Chan 1986), and the 30–90-day mode (Jiang and Waliser 2009). The review of Serra et al. (2014) offers a detailed description of tropical variability.

Convolutional neural networks (CNNs or ConvNets) represent a class of deep learning ANN. It has been demonstrated that CNNs can be used to approximate any continuous function to a certain accuracy, which depends on the depth of the network (Zhou 2020). Methods in this class attempt to extract information using stacked layers of nonlinear information processing algorithms distributed in a hierarchical architecture (LeCun et al. 2015). The literature describes many variants of the CNN architecture (Gu et al. 2018); however, they share similar basic components such as an input layer, a hidden layer, and an output layer. The convolutional layer is one of the hidden layers and is the layer that learns feature properties during training. The feature maps are computed by the convolutional kernels. In this study a one-dimensional (1D) CNN model is used, which is suitable for time series analysis (Wibawa et al. 2022). In 1D CNN models the kernel is a vector. The 1D CNN model is applied to construct a bandpass filter for tropical ISV. To demonstrate the accuracy of the method, the CNN-based filter is first compared to a conventional Lanczos digital filter (Duchon 1979), and then it is applied to problems for which conventional filters face challenges due to the limited sample size and for which surrogate methods are instead adopted.

This paper is organized as follows: In section 2, we introduce the data and methods used in this study along with the description of the 1D CNN-based filter. In section 3, we present the results of applying the CNN filter to extract the intraseasonal variability (30–90 days) from fields relevant to tropical variability such as zonal wind stress (a basic-state variable) and outgoing longwave radiation at the top of the atmosphere (a good indicator of tropical deep convection and associated rainfall). In section 4 we present the downstream impact of the CNN-based filter on calculating the MJO components of the zonal wind stress and outgoing longwave radiation (OLR). We summarize the main findings and discuss perspectives in section 5.

2. Data and methods

a. Data

For wind stress we use a combination of the high-resolution (0.25° latitude–longitude) Blended Sea Winds stress product (Zhang et al. 2006; Peng et al. 2013) and the Advanced Scatterometer (Bentamy and Fillon 2012) for the period 1988–2016. Daily means of NOAA interpolated OLR with a horizontal resolution of 2.5° latitude–longitude (Liebmann and Smith 1996) are used for the period 1980–2022. These satellite-derived datasets are selected instead of reanalyses because the latter are more likely to be affected by the underlying models' ability to simulate the ISV of the tropics. Daily anomalies for these variables are calculated by removing the climatological mean, defined as the daily average over all years in each dataset. For the CNN model described in the next section, the data are partitioned into a training period (1 January 1988–31 December 2012), a validation period (1 January 2013–31 December 2014), and a testing period (1 January 2015–31 December 2016). To maintain independence of the training data from the testing/validation data, the climatology is computed using only the training period.
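As an illustration of this preprocessing, the sketch below computes daily anomalies relative to a training-period climatology and splits the record into the three periods. The file name and variable name (olr_daily.nc, olr) are hypothetical; this is not the authors' published code.

```python
# Minimal preprocessing sketch (assumed file and variable names; not the authors' code).
import xarray as xr

ds = xr.open_dataset("olr_daily.nc")          # daily OLR on a lat-lon grid
olr = ds["olr"]

train = olr.sel(time=slice("1988-01-01", "2012-12-31"))
valid = olr.sel(time=slice("2013-01-01", "2014-12-31"))
test  = olr.sel(time=slice("2015-01-01", "2016-12-31"))

# Climatology from the training period only, keeping validation/testing data independent.
clim = train.groupby("time.dayofyear").mean("time")

anom_train = train.groupby("time.dayofyear") - clim
anom_valid = valid.groupby("time.dayofyear") - clim
anom_test  = test.groupby("time.dayofyear") - clim
```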

b. 1D CNN

The basic mathematical operation of the 1D CNN model is the convolutional operation, which is defined as the sliding dot product of the signal and the weights of the kernel (or filter):

\hat{y}_i = \sum_{k=-p}^{p} x_{i-k}\, w_k,

where {xj, j = 1, …, N} is the input time series (e.g., OLR or zonal wind stress at a grid point) and {wk, |k| ≤ p, p < N} represent the weights of the convolutional kernel; ŷi is the output signal at grid point i.

The weights of the convolutional kernel (wk) at each grid point are estimated by training the 1D CNN model. The kernel size p needs to be set before the training process starts and remains fixed. In the CNN model used here, the assumption is that the convolution operation of a daily signal with a kernel of length p will retain signals with a period greater than p days.
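To make the operation concrete, the following minimal NumPy sketch evaluates the sliding dot product above for a single grid-point time series, using zero padding so that the output has the same length as the input (the padding choice mirrors the architecture described below and is an illustrative assumption here).

```python
import numpy as np

def conv1d_same(x, w):
    """Sliding dot product (convolution) of time series x with an odd-length
    kernel w, zero-padded so the output has the same length as the input."""
    p = len(w) // 2
    xp = np.pad(x, p, mode="constant")   # zero padding at both ends
    # y_i = sum_k x_{i-k} w_k; reversing w turns the windowed dot product into a
    # true convolution (equivalent to np.convolve(x, w, mode="same") for odd kernels).
    return np.array([np.dot(xp[i:i + len(w)], w[::-1]) for i in range(len(x))])
```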

The architecture of the CNN model consists of an input layer, a subtract layer separating two convolutional layers, and an output layer. The schematic of the model is shown in Fig. 1. In the input layer, the time series of daily anomaly maps with dimension (latitude, longitude) are parsed into time series at each grid point. In this layer, the dimension of the input dataset is reshaped into (time, grid), where grid = latitude × longitude. Data scaling (e.g., standardization and normalization), sometimes recommended as a preprocessing step when using neural network models (Wang et al. 2006), is not performed because the model uses only one input variable and the output variable has the same units as the input variable. In the first convolutional layer, each time series of daily anomalies is passed through a convolution with the kernel size p = 90. The size of the kernel is determined by the intention to design a filter applicable to operational seasonal forecasts, which have a typical length of 90 days (e.g., NCEP CFSv2; Saha et al. 2014). This convolution operation retains signals with variability greater than 90 days. Next, the output of the convolution (a linear operation) is passed to the subtraction layer. In this layer, the output of the first convolutional layer is subtracted from the input. There are no learnable parameters in this layer. After subtraction, the remaining signal retains variability of less than 90 days. Then this 90-day high-pass-filtered signal is passed to the second convolutional layer with the kernel size p = 30, and the convolution operation retains variability between 30 and 90 days. Trials using different lengths for the second kernel indicate a small influence of this kernel size on the accuracy of the model; the length of the first kernel has the largest impact (not shown). The convolutional layers do not change the dimension of the output, and the padding is set to zero. In the output layer, the filtered time series at each grid point are mapped into time series of daily filtered maps with dimension (latitude, longitude). Finally, the output time series at each grid point are compared to the 30–90-day Lanczos-filtered time series at the same grid point. In this algorithm, the filtering is done independently for each grid point, that is, no spatial pattern information is used in the training.
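A minimal Keras sketch of this two-layer architecture for a single grid point is given below. Layer options such as use_bias=False and the treatment of the full time series as one sample are our assumptions, not a transcription of the authors' published code (linked in the data availability statement).

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_filter_model(n_days):
    # Daily anomaly time series at one grid point, shape (time, 1).
    inputs = keras.Input(shape=(n_days, 1))
    # First convolution (kernel size 90): retains variability longer than ~90 days.
    low = layers.Conv1D(filters=1, kernel_size=90, padding="same",
                        use_bias=False)(inputs)
    # Subtraction layer (no learnable parameters): removes the low-frequency part.
    highpass = layers.Subtract()([inputs, low])
    # Second convolution (kernel size 30): retains ~30-90-day variability.
    band = layers.Conv1D(filters=1, kernel_size=30, padding="same",
                         use_bias=False)(highpass)
    return keras.Model(inputs, band)

model = build_filter_model(n_days=9132)   # length of the 1988-2012 training record
```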

Fig. 1.

The architecture of the CNN-based filter. All layers have the same size, which is the sample size. The dashed–dotted line denotes the kernel size. In the first convolutional layer, the kernel size p = 90 and in the second convolutional layer p = 30. Grid = lon × lat. Time represents the number of samples (days). In each layer, a rectangle represents one grid point. The horizontal arrows show the workflow of the algorithm excluding the hidden layers in the convolutional layers. The gray line connecting the input layer and the subtraction layer denotes that input data can be passed to the subtract layer from the input layer.


The CNN filter is configured to train iteratively for up to 500 epochs to minimize the loss function, with the option to stop the training earlier if the error on the validation data does not improve for 10 consecutive epochs. For clarification, an epoch refers to one pass of the entire training dataset through the CNN. At the end of each forward pass the error is estimated, and in the backward pass the error is used to adjust the weights so as to reduce the error for that training batch; this is done iteratively for each batch in each epoch. The optimization of the CNN parameters is based on the mean-squared-error (MSE) loss function:
\mathrm{MSE} = \frac{1}{M}\sum_{i=1}^{M} (y_i - \hat{y}_i)^2,
where ŷi is the output from the network and yi is the desired output from the network, which in this case represents the 30–90-day filtered signal obtained by applying a Lanczos digital filter to the daily anomalies. The parameter M represents the length of the time series used for training. The Adam optimization algorithm (Kingma and Ba 2014) is used to minimize the loss function and update the CNN parameters. The default hyperparameters for the Adam optimization algorithm are the learning rate, α = 0.001; the exponential decay rates for the first and second moment estimates, β1 = 0.9 and β2 = 0.999; and a very small number that prevents division by zero in the implementation, ϵ = 10⁻⁸. The CNN training is stopped when the MSE on the training data does not improve by more than a threshold of 0.001 squared units of the filtered variable for 10 consecutive epochs. When designing a CNN model, a common challenge is model overfitting. An overfit model displays high accuracy when predicting the training data while failing to generalize to new, unseen samples. One way to detect a model's tendency to overfit is to compare the loss curves for the training and validation (or testing) periods (Giante et al. 2019). Our inspection of loss curves at random grid points indicates a small gap between the error of the model output based on training data and that based on validation data. An example for the OLR data is shown in Fig. 2.
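Continuing the sketch above, the training configuration described in this section might look as follows in Keras. Monitoring the validation loss and restoring the best weights are our assumptions, and x_train, y_train, x_valid, y_valid are the anomaly inputs and Lanczos-filtered targets assumed to be prepared beforehand.

```python
from tensorflow import keras

# Adam with the default hyperparameters quoted above; MSE loss.
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001, beta_1=0.9,
                                              beta_2=0.999, epsilon=1e-8),
              loss="mse")

# Stop if the monitored loss does not improve by 0.001 for 10 consecutive epochs.
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                           min_delta=0.001,
                                           restore_best_weights=True)

history = model.fit(x_train, y_train,
                    validation_data=(x_valid, y_valid),
                    epochs=500, callbacks=[early_stop])
```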
Fig. 2.

Training and validation loss for the OLR data at a grid point located at 160°E on the equator.


The CNN model is trained independently for each grid point and dataset. For both datasets, the training period is 1 January 1988–31 December 2012, the validation period is 1 January 2013–31 December 2014, and the testing period is 1 January 2015–31 December 2016. The training period provides 9132 samples or days (time dimension in Fig. 1) at each grid point (grid dimension in Fig. 1). Because ISV is present year-round, all days are suitable for being used in the training. The impact of sample size on the training error of deep ANNs is an open question and is problem dependent (Chattopadhyay et al. 2020).

c. Evaluation metrics

The performance of the CNN model during the testing period is evaluated using three metrics: 1) the root-mean-squared error (RMSE), 2) the index of agreement (IOA), and 3) the coefficient of determination (R2). The IOA, developed by Willmott (1981) as a standardized measure of the degree of model prediction error, varies between 0 and 1. A value of 1 indicates a perfect match, and 0 indicates no agreement at all between the predicted and observed data. The IOA is based on the ratio of the mean-squared error to the potential error and is calculated as
\mathrm{IOA} = 1 - \frac{\sum_{i=1}^{n} (\hat{x}_i - x_i)^2}{\sum_{i=1}^{n} \left(|\hat{x}_i - \bar{x}| + |x_i - \bar{x}|\right)^2},
where xi is the expected value or the truth, x̂i is the value estimated by the CNN model, and x̄ is the mean of the expected values. The parameter n represents the length of the time series produced by the CNN model during the testing period. Unlike the RMSE, the IOA is a bounded and nondimensional measure. Its dimensionless nature facilitates the comparison of agreement among pairs of datasets with different units (Duveiller et al. 2016). The RMSE gives estimates of the average errors in the model, whereas the IOA provides information about the relative size of the average difference (Willmott 1982). A model performs well when the RMSE approaches zero and the IOA is close to 1 (or 100%). The R2 measures the proportion of the total variability explained by the prediction model and is interpreted as a measure of the correspondence in phase between the predicted variable and the verification (Murphy 1995).
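The three metrics translate directly into code. The sketch below is a straightforward transcription of the formulas above; the R2 convention shown (one minus the ratio of residual to total variance) is a common choice and may differ in detail from the authors' calculation.

```python
import numpy as np

def rmse(truth, pred):
    """Root-mean-squared error."""
    return np.sqrt(np.mean((pred - truth) ** 2))

def ioa(truth, pred):
    """Willmott (1981) index of agreement."""
    tbar = truth.mean()
    num = np.sum((pred - truth) ** 2)
    den = np.sum((np.abs(pred - tbar) + np.abs(truth - tbar)) ** 2)
    return 1.0 - num / den

def r2(truth, pred):
    """Coefficient of determination: fraction of total variability explained."""
    ss_res = np.sum((truth - pred) ** 2)
    ss_tot = np.sum((truth - truth.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```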

3. Results

Intraseasonal variability of the tropics

The CNN-based filter was applied to extract the ISV in the 30–90-day range from the daily anomalies of the zonal component of the wind stress vector and OLR. The results for the wind stress anomalies and OLR are shown in Fig. 3 for the first year of the testing period, that is, 1 January 2015–31 December 2015. For a complete description of each field, the total daily anomalies are also included. The signal filtered using the conventional Lanczos filter serves as the truth. The Lanczos filtering is applied to the whole year of data, which requires additional data before the first and after the last date of the analyzed period. For example, to obtain a filtered time series that begins on 1 January 2015, the time series to which the Lanczos filter is applied begins 90 days prior, that is, on 2 October 2014. If the end date of the filtered time series is 31 December 2015, the end date of the unfiltered time series is 31 March 2016. For the zonal wind stress, the filtered signal based on the CNN method is shown in Fig. 3c along with the 30–90-day bandpass-filtered signal (Fig. 3b) obtained using the conventional Lanczos filtering method.
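For reference, a common formulation of the Lanczos band-pass weights used as the truth here (after Duchon 1979) is sketched below. The exact smoothing factor and normalization may differ from the implementation behind the figures, so treat this as illustrative.

```python
import numpy as np

def lanczos_bandpass_weights(n_wts, long_period, short_period):
    """Lanczos band-pass weights (after Duchon 1979).
    n_wts: total (odd) number of weights, e.g., 181.
    long_period, short_period: band limits in days, e.g., 90 and 30."""
    n = (n_wts - 1) // 2
    f_low, f_high = 1.0 / long_period, 1.0 / short_period
    k = np.arange(-n, n + 1)
    w = np.full(n_wts, 2.0 * (f_high - f_low))           # central weight (k = 0)
    nz = k != 0
    w[nz] = (np.sin(2 * np.pi * f_high * k[nz])
             - np.sin(2 * np.pi * f_low * k[nz])) / (np.pi * k[nz])
    return w * np.sinc(k / n)                             # Lanczos sigma factor

weights = lanczos_bandpass_weights(181, 90, 30)
# anom: 1D daily anomaly series at one grid point (assumed prepared earlier).
# 'valid' convolution loses 90 days at each end of the record.
filtered = np.convolve(anom, weights, mode="valid")
```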

Fig. 3.

Hovmöller diagrams averaged over 7.5°S–7.5°N for the testing period 1 Jan 2015–31 Dec 2015. (top) Zonal wind stress (N m−2) and (bottom) OLR (W m−2). (a),(e) The total daily anomaly of the fields. (b),(f) The 30–90-day filtered anomalies using the Lanczos filter. (c),(g) The 30–90-day filtered anomalies using the CNN-based filter. (d),(h) The Lanczos filtered anomalies minus the CNN-based filtered anomalies.


The direct comparison (Figs. 3b,c) and the difference between the filtered signals obtained using the conventional and ML/AI methods (Fig. 3d) show good agreement between the patterns in the ISV signal, with small differences in the amplitudes. The amplitudes of the zonal wind stress anomalies obtained using the CNN filter are 37% larger (smaller) than the amplitudes of the filtered zonal wind stress anomalies obtained using the Lanczos method. This number is calculated by dividing the absolute value of the maximum difference between the filtered amplitudes from the two methods by the absolute value of the maximum (minimum) amplitude of the Lanczos-filtered anomalies. Results for the OLR anomalies are shown in Figs. 3e–h for the same period (1 January 2015–31 December 2015) as for the zonal wind stress. The OLR-based results also suggest good agreement between the two methods. The difference in the amplitude of the filtered OLR produced by the two methods is in the same ballpark (31.5%) as for the zonal wind stress.

A k-fold cross-validation analysis (Geisser 1975) for the OLR was conducted by resampling the years of training, validation, and testing as shown in Table 1. In our method, the training data are separated into subsets of the same size. The validation and testing periods are independent of each other and are also swapped. The mean error and standard deviation for the testing period over all k = 6 trials are shown in Fig. 4. The mean error (ME) and standard deviation (σ) are defined as
\mathrm{ME} = \frac{1}{N}\sum_{i=1}^{N} (y_i - \hat{y}_i), \quad \text{and}
\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left[(y_i - \hat{y}_i) - \mathrm{ME}\right]^2},
where y^i is the CNN-filtered signal, yi is the Lanczos-filtered signal, and N denotes the number of trials.
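Given the Lanczos-minus-CNN differences from the k = 6 trials stacked in an array (a hypothetical errs with shape [trial, time, longitude]), the two statistics reduce to the following.

```python
import numpy as np

# errs: differences y - y_hat for each trial; shape (n_trials, n_time, n_lon).
me = errs.mean(axis=0)                              # mean error over the trials
sigma = np.sqrt(((errs - me) ** 2).mean(axis=0))    # standard deviation of the error
```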
Table 1.

Subsets of years used for k-fold cross validation.

Fig. 4.

Hovmöller diagrams of the effectiveness of the CNN-based filtering model measured by (a) ME and (b) standard deviation (σ) obtained by resampling the training, validation, and testing periods into six folds. The domain is an average over 7.5°S–7.5°N.


As in Fig. 3h, the mean error (Fig. 4a) and standard deviation (Fig. 4b) also emphasize the model’s limitations at the ends of the time series. The errors manifest in the first 30 days, after which the CNN models become equivalent to each other.

To further evaluate the ability of the CNN-based filter to isolate the ISV in the 30–90-day range, Figs. 5 and 6 show additional metrics constructed using results based on the zonal wind stress (Fig. 5) and OLR (Fig. 6) for the testing period. Based on the maximum variance of the 30–90-day filtered anomalies, two regions are selected: 125°–160°E, 7.5°S–7.5°N for the zonal wind stress and 160°–190°E, 7.5°S–7.5°N for the OLR. The comparison between the zonal wind stress (Fig. 5a) and OLR (Fig. 6a) filtered time series using the conventional and CNN-based filtering methods reveals a notable difference at the beginning of the time series.

Fig. 5.

(a) Time series of 30–90-day filtered anomalies using the Lanczos filter (red solid curve) and the CNN-based filter (blue dashed line) averaged over 125°–160°E; 7.5°S–7.5°N for the testing period 1 Jan 2015–31 Dec 2016 along with their correlation coefficient (r). (b) The power spectrum density times frequency of filtered anomalies shown in (a). (c) IOA and (d) RMSE between the 7.5°S and 7.5°N averaged filtered anomalies using the Lanczos and CNN-based filters during the testing period 1 Jan 2015–31 Dec 2016.


Fig. 6.

As in Fig. 5, but for OLR anomalies averaged over 160°–190°E, 7.5°S–7.5°N.


There are other cases where the end of the time series also shows differences between the two filtering methods. This difference is related to the constraint in the convolutional layer to maintain the length of the time series. The power spectra (Figs. 5b and 6b) of the two time series show that the CNN filtering method captures very well the spectral peak centered around 35 days and slightly overestimates the amplitude of the spectral peak located close to 60 days. The CNN method also shows an adequate removal of the spectral power existing in data outside of the intended spectral window. The RMSE (Figs. 5c and 6c) and IOA (Figs. 5d and 6d) between the time series constructed using the two methods also reveal a good spatial agreement at all longitudes in a tropical channel between 7.5°S and 7.5°N. RMSE is used as an indicator of the outliers and IOA is a measure of the degree to which the model’s predictions are free of errors (Willmott 1981).

Consistent with the IOA, R2 for the zonal wind stress (Fig. 7a) is lower than for the OLR (Fig. 7b). One can speculate that the better fit for the OLR is due to a stronger MJO signal in this field compared to the wind (e.g., Waliser et al. 2009) and/or a better satellite product for the OLR than for the surface wind stress.

Fig. 7.

R2 of the filtered anomalies using the Lanczos and CNN-based filters during 1 Jan 2015–31 Dec 2016 for (a) zonal wind stress and (b) OLR. Time series at each zonal grid point represent the average between 7.5°S and 7.5°N.


4. Applications of the CNN-based filter

The intraseasonal anomalies are used for model evaluation, to construct metrics and diagnostics for studying the properties of the climate system on S2S time scales, and to characterize tropical oscillations such as the MJO and BSISO, the two dominant modes of tropical ISV. Lybarger et al. (2020) introduced a metric designed to characterize the interaction between the MJO component of the wind stress and a low-frequency (∼48 months) oscillation of the tropical Pacific, ENSO. The metric is used to evaluate the ENSO forecast skill of a seasonal forecast system. The key element of the metric is the MJO component of the wind stress. The first step in calculating the MJO component in the forecast anomalies of the wind stress is to extract the ISV from the forecast anomalies and then project the observed patterns of the MJO [the first four empirical orthogonal functions (EOFs)] onto the ISV anomalies. Because the length of the seasonal forecasts is 90 days, conventional filtering methods for extracting the ISV cannot be applied. For example, a Lanczos filter requires 181 days, and a frequency of (90 days)−1 cannot be extracted by a Fourier analysis. Thus, proxy methods have been developed based on the total (unfiltered) daily anomalies of the wind stress. We used the CNN-filtered anomalies to compute the MJO component of the zonal wind stress (τxMJO) in the tropics and compare the results with the case when τxMJO is extracted from unfiltered daily anomalies as in Lybarger et al. (2020). Figure 8a shows the IOA between τxMJO computed using daily anomalies filtered with a 30–90-day Lanczos filter (Fig. 3b) and τxMJO computed using unfiltered daily wind stress anomalies (Fig. 3a). In both cases, τxMJO is computed following the method of Lybarger et al. (2020). Except for 1995 and 2010, the IOA is above 0.55. For comparison, Fig. 8b shows the IOA between τxMJO computed using daily anomalies of zonal wind stress filtered with a 30–90-day Lanczos filter (Fig. 3b) and τxMJO computed using daily anomalies filtered with the CNN-based filter (Fig. 3c). To evaluate the impact of the CNN-based filtering method, Fig. 8c shows the difference between the two IOAs. In this comparison, τxMJO computed using the Lanczos-filtered data is the truth. The positive values in the difference plot indicate that CNN-filtered data result in a better estimate of τxMJO than the proxy method used by Lybarger et al. (2020). The small range of values in Fig. 8b indicates a consistent ability of the CNN-based filter to reproduce the features of the Lanczos filter.
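The projection step described above can be sketched as follows, assuming the observed MJO EOF patterns are stored row-wise and are orthonormal; this generic EOF-projection recipe may differ in its normalization conventions from Lybarger et al. (2020).

```python
import numpy as np

def mjo_component(anom_f, eofs):
    """Project filtered anomaly maps onto MJO EOFs and reconstruct the MJO part.
    anom_f: 30-90-day filtered anomalies, shape (n_time, n_space).
    eofs:   observed MJO patterns (e.g., first four EOFs), shape (n_modes, n_space)."""
    pcs = anom_f @ eofs.T      # principal-component time series, (n_time, n_modes)
    return pcs @ eofs          # MJO component of the field, (n_time, n_space)
```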

Fig. 8.

Longitude–time plots of (a) IOA between τxMJO based on Lanczos-filtered daily anomalies (τx3090(L)) and τxMJO based on unfiltered daily anomalies (τx). (b) IOA between τxMJO based on Lanczos-filtered daily anomalies and τxMJO based on CNN-filtered daily anomalies (τx3090(CNN)). (c) The difference between (b) and (a). See text for details.


In all years, the IOA based on the CNN filter is larger than the IOA when using unfiltered anomalies. In the western and central Pacific, the IOA of the CNN-based method is much larger than the IOA of unfiltered anomalies. In the eastern Pacific, the difference between the IOAs is slightly smaller than in the other regions. This smaller difference in the eastern Pacific could be explained by the fact that MJO is more active in the western Pacific than in the east, where low-frequency variability (e.g., ENSO) is the dominant signal.

Another application that can benefit from the CNN-based filter is the real-time monitoring of the MJO (e.g., Gottschalck et al. 2010; Kikuchi et al. 2012; Kikuchi 2020). We applied the method described by Kikuchi (2020) to extract the MJO signal in the OLR anomalies using both a proxy method and the CNN-based filtering method for computing the daily filtered anomalies. In the proxy method, anomalies are first constructed by subtracting from each daily value the climatological mean and three harmonics of the climatological annual cycle. Second, these anomalies are filtered by subtracting the mean of the previous 40 days from each daily anomaly. In both methods, the MJO signal is then extracted from the ISV by projecting the daily filtered anomalies onto the two extended EOFs (EEOFs) precomputed by Kikuchi (2020) and then multiplying the resulting principal component (PC) time series by the EEOFs (OLRMJO = EEOF1 × PC1 + EEOF2 × PC2). Results from the two methods are compared in Fig. 9, which shows the OLR-filtered anomalies and the reconstructed OLRMJO.
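A heavily simplified sketch of the two steps, the proxy filtering and the EEOF reconstruction, is given below. It ignores the time-lag structure of the EEOFs and the normalization used in the bimodal index, so it only illustrates the shape of the calculation.

```python
import numpy as np

def proxy_filter(anom, window=40):
    """Proxy ISV filter: subtract the mean of the previous `window` days."""
    out = np.full_like(anom, np.nan)           # anom: float array, time on axis 0
    for t in range(window, anom.shape[0]):
        out[t] = anom[t] - anom[t - window:t].mean(axis=0)
    return out

def olr_mjo(anom_f, eeof1, eeof2):
    """Reconstruct OLR_MJO = EEOF1*PC1 + EEOF2*PC2 from filtered anomaly maps
    (anom_f, shape [n_time, n_space]) and two flattened EEOF patterns."""
    pc1 = anom_f @ eeof1
    pc2 = anom_f @ eeof2
    return np.outer(pc1, eeof1) + np.outer(pc2, eeof2)
```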

Fig. 9.

Hovmöller diagrams of daily OLR filtered anomalies (shading) and the MJO signal (contours) averaged over 7.5°S–7.5°N for the period 1 Sep 2018–31 Mar 2019. Above the gray dashed line, anomalies are filtered using a 25–90-day Lanczos filter. Below the gray dashed line, anomalies are filtered using (a) the proxy method (see text for details) and (b) the CNN-based method. The gray line denotes the last date for which data would be available for the proxy method. In both cases the MJO is computed following the Kikuchi (2020) method.


In the top part of the figures (above the gray dashed line), anomalies are filtered using a 25–90-day bandpass Lanczos filter. In the bottom part of the figures (below the gray dashed line), anomalies are filtered using the proxy method (Fig. 9a) and the CNN-based method (Fig. 9b). It is easy to see that the CNN-filtered anomalies and the MJO signal remain coherent with the existing structures, and the amplitude of the ISV (filtered anomalies) is not distorted as it is in the case of the proxy method. Relative to the period when the Lanczos filter was applied, the amplitudes of the anomalies become stronger in the period when the proxy filtering method is used. The MJO is characterized by its amplitude and phase. The amplitude is typically measured by √(PC1² + PC2²), with the PCs normalized by their standard deviations. The phase space can be analyzed using the Wheeler–Hendon diagram (Wheeler and Hendon 2004). Figure 10 shows the PCs of two MJO events that occurred in winter 2018/19 along with the phase space defined by the two PCs. Using the proxy method developed by Kikuchi (2020), the amplitude of the MJO activity is slightly underestimated in February and slightly overestimated in March, relative to the Lanczos method, which represents the truth. The well-known 10–15-day lag between PC1 and PC2 becomes shorter in the proxy method. The phase diagrams indicate that MJO phases 7–8 are distorted by the proxy method. The CNN-based filtering produces results in better agreement with the Lanczos method than the proxy method does.
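In code, the amplitude and the position in the PC1–PC2 phase plane follow directly from the two PCs (normalization by the standard deviation as stated above; the division of the plane into the eight conventional phases is omitted here).

```python
import numpy as np

# pc1, pc2: principal-component time series from the projection step above.
pc1n = pc1 / pc1.std()
pc2n = pc2 / pc2.std()
amplitude = np.sqrt(pc1n ** 2 + pc2n ** 2)     # MJO amplitude
angle = np.arctan2(pc2n, pc1n)                 # angular position in the phase plane
```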

Fig. 10.

(top) Comparison of MJO amplitude (PC1 and PC2) and (bottom) phase space for the period 1 Sep 2018–31 Mar 2019. The black line denotes the Lanczos filter, and the calculation is done assuming availability of data in the future. The blue and red lines correspond to the proxy and CNN calculations. The vertical line on 1 Jan 2019 denotes the date after which no data would be available for applying the Lanczos filtering.


A comparison of the power spectra of the filtered anomalies and MJO signal (Fig. 11) illustrates the efficiency of the CNN-based method at filtering the intended frequencies with the right power spectral density and not introducing as many spurious frequencies.

Fig. 11.

Power density spectra using a 25–90-day Lanczos filter (red solid line), Kikuchi et al. (2012) proxy method (blue dashed line), and CNN-based method (red dashed line). (a) Daily OLR anomalies. (b) MJO component of the daily OLR anomalies. Power spectra are calculated for the time series of area average over 160°–190°E; 7.5°S–7.5°N between 1 Sep 2018 and 31 Mar 2019.


5. Conclusions

In this study we have demonstrated that CNN methods can be applied to construct a bandpass filter for intraseasonal variability (30–90 days) of the tropical atmosphere. The CNN-based filter contains two convolutional layers, with 90 weights in the first convolutional layer and 30 weights in the second convolutional layer. Our results suggest that the CNN-based filter is a robust new technique for signal processing of geophysical data. The CNN-based filter yields results similar to those of the Lanczos filter, with no loss of data at the beginning and end of the analyzed period. The Lanczos filter uses 181 weights (or points); therefore, 90 points (days in this case) are lost at the beginning and end of the time series. In fact, Lanczos filtering cannot be applied to a time series with a length of 90 days if the cutoff frequency is (90 days)⁻¹. The CNN-based filter shows only a small reduction in accuracy at the ends of the time series. The number of weights used by the Lanczos filter is determined to ensure a reduction of the Gibbs phenomenon that occurs in the vicinity of a discontinuity when the Fourier analysis is carried out (Duchon 1979). Since the CNN-based filter does not use a Fourier analysis, generation of Gibbs waves is not possible.

The CNN-based filter works well when applied to basic-state variables (wind stress) and to phenomena-based variables (OLR). For both variables, the relative difference from the conventional Lanczos filtering method is in the ballpark of 30%–40%. The IOA for the state variable is also almost on par with that for the OLR. By further using the filtered data to successfully construct the MJO signal, we demonstrate that the signal extracted by the CNN-based filter is physically meaningful, as shown by the power spectra and the MJO phase diagram.

The CNN-based filter can be applied to extract the ISV from forecasts. Current extraction methodologies (e.g., Gottschalck et al. 2010; Janiga et al. 2018) have disadvantages such as contamination of the MJO signal by higher-frequency variability and by the low-frequency variability associated with ENSO. The filter described in this study works for seasonal forecasts. Further developments are needed to make it applicable to S2S forecasts, which are shorter, for example, 45 days (Vitart et al. 2017). The 90-day requirement embedded in the CNN filter presented in this study prevents its application to shorter time series.

Acknowledgments.

The authors acknowledge support from the NOAA/WPO through Grant NA20OAR4590316. CS was also partially supported by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the director of the Office of Science of the U.S. Department of Energy (Contract DEAC02-06CH11357), through a joint appointment. The study was partially supported by resources provided by the Office of Research Computing at George Mason University and funded in part by grants from the National Science Foundation (Awards 1625039 and 2018631). The authors appreciate the feedback provided by three anonymous reviewers.

Data availability statement.

The CNN is built using Keras, a deep learning framework that wraps TensorFlow. The source code for the CNN filter has been made publicly available (https://github.com/cristianastan2/AIES-Deep-Learning-Filter). QSCAT data are available at https://www.remss.com/missions/qscat/. ASCAT data are deposited online (http://apdrc.soest.hawaii.edu/datadoc/ascat.php). OLR data are deposited online (https://climatedataguide.ucar.edu/climate-data/outgoing-longwave-radiation-olr-avhrr). The EEOFs used for computing the MJO signal in the OLR are deposited online (http://iprc.soest.hawaii.edu/users/kazuyosh/Bimodal_ISO.html).

REFERENCES

  • Barnes, E. A., J. W. Hurrell, I. Ebert-Uphoff, C. Anderson, and D. Anderson, 2019: Viewing forced climate patterns through an AI lens. Geophys. Res. Lett., 46, 13 389–13 398, https://doi.org/10.1029/2019GL084944.

  • Bentamy, A., and D. C. Fillon, 2012: Gridded surface wind fields from Metop/ASCAT measurements. Int. J. Remote Sens., 33, 1729–1754, https://doi.org/10.1080/01431161.2011.600348.

  • Chattopadhyay, A., P. Hassanzadeh, and S. Pasha, 2020: Predicting clustered weather patterns: A test case for applications of convolutional neural networks to spatio-temporal climate data. Sci. Rep., 10, 1317, https://doi.org/10.1038/s41598-020-57897-9.

  • Chen, Y., M. Zhang, M. Bai, and W. Chen, 2019: Improving the signal-to-noise ratio of seismological datasets by unsupervised machine learning. Seismol. Res. Lett., 90, 1552–1564, https://doi.org/10.1785/0220190028.

  • Duchon, C. E., 1979: Lanczos filtering in one and two dimensions. J. Appl. Meteor., 18, 1016–1022, https://doi.org/10.1175/1520-0450(1979)018<1016:LFIOAT>2.0.CO;2.

  • Dutta, D., and R. K. Bhattacharjya, 2022: A statistical bias correction technique for global climate model predicted near-surface temperature in India using the generalized regression neural network. J. Water Climate Change, 13, 854–871, https://doi.org/10.2166/wcc.2022.214.

  • Duveiller, G., D. Fasbender, and M. Meroni, 2016: Revisiting the concept of a symmetric index of agreement for continuous datasets. Sci. Rep., 6, 19401, https://doi.org/10.1038/srep19401.

  • Flach, P., 2012: Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge University Press, 409 pp.

  • Geisser, S., 1975: The predictive sample reuse methods with applications. J. Amer. Stat. Assoc., 70, 320–328, https://doi.org/10.1080/01621459.1975.10479865.

  • Giante, S., A. S. Charles, S. Krishnaswamy, and G. Mishe, 2019: Visualizing the PHATE of neural networks. arXiv, 1908.02831v1, https://doi.org/10.48550/arXiv.1908.02831.

  • Gottschalck, J., and Coauthors, 2010: A framework for assessing operational Madden–Julian oscillation forecasts: A CLIVAR MJO Working Group project. Bull. Amer. Meteor. Soc., 91, 1247–1258, https://doi.org/10.1175/2010BAMS2816.1.

  • Gu, J., and Coauthors, 2018: Recent advances in convolutional neural networks. Pattern Recognit., 77, 354–377, https://doi.org/10.1016/j.patcog.2017.10.013.

  • Hasselmann, K., 1976: Stochastic climate models. Part I: Theory. Tellus, 28, 473–485, https://doi.org/10.1111/j.2153-3490.1976.tb00696.x.

  • Janiga, M. A., C. J. Schreck III, J. A. Ridout, M. Flatau, N. P. Barton, E. J. Metzger, and C. A. Reynolds, 2018: Subseasonal forecasts of convectively coupled equatorial waves and the MJO: Activity and predictive skill. Mon. Wea. Rev., 146, 2337–2360, https://doi.org/10.1175/MWR-D-17-0261.1.

  • Jiang, X., and D. E. Waliser, 2009: Two dominant subseasonal variability modes of the eastern Pacific ITCZ. Geophys. Res. Lett., 36, L04704, https://doi.org/10.1029/2008GL036820.

  • Kikuchi, K., 2020: Extension of the bimodal intraseasonal oscillation index using JRA-55 reanalysis. Climate Dyn., 54, 919–933, https://doi.org/10.1007/s00382-019-05037-z.

  • Kikuchi, K., B. Wang, and Y. Kajikawa, 2012: Bimodal representation of the tropical intraseasonal oscillation. Climate Dyn., 38, 1989–2000, https://doi.org/10.1007/s00382-011-1159-1.

  • Kingma, D. P., and J. Ba, 2014: Adam: A method for stochastic optimization. arXiv, 1412.6980v9, https://doi.org/10.48550/arXiv.1412.6980.

  • Labe, Z. M., and E. A. Barnes, 2021: Detecting climate signals using explainable AI with single-forcing large ensembles. J. Adv. Model. Earth Syst., 13, e2021MS002464, https://doi.org/10.1029/2021MS002464.

  • Lau, K.-M., and P. H. Chan, 1986: Aspects of the 40–50 day oscillation during the Northern summer as inferred from outgoing longwave radiation. Mon. Wea. Rev., 114, 1354–1367, https://doi.org/10.1175/1520-0493(1986)114<1354:AOTDOD>2.0.CO;2.

  • LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436–444, https://doi.org/10.1038/nature14539.

  • Li, S.-F., F. M. B. Jacques, R. A. Spicer, T. Su, T. E. V. Spicer, J. Yang, and Z.-K. Zhou, 2016: Artificial neural networks reveal a high-resolution climatic signal in leaf physiognomy. Palaeogeogr. Palaeoclimatol. Palaeoecol., 442, 1–11, https://doi.org/10.1016/j.palaeo.2015.11.005.

  • Liebmann, B., and C. Smith, 1996: Description of a complete (interpolated) outgoing longwave radiation dataset. Bull. Amer. Meteor. Soc., 77, 1275–1277, https://doi.org/10.1175/1520-0477-77.6.1274.

  • Liu, Y., and Coauthors, 2016: Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv, 1605.01156v1, https://doi.org/10.48550/arXiv.1605.01156.

  • Lybarger, N. D., C.-S. Shin, and C. Stan, 2020: MJO wind energy and prediction of El Niño. J. Geophys. Res. Oceans, 125, e2020JC016732, https://doi.org/10.1029/2020JC016732.

  • Madden, R. A., and P. R. Julian, 1971: Detection of a 40–50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci., 28, 702–708, https://doi.org/10.1175/1520-0469(1971)028<0702:DOADOI>2.0.CO;2.

  • Madden, R. A., and P. R. Julian, 1972: Description of global-scale circulation cells in the tropics with 40–50 day period. J. Atmos. Sci., 29, 1109–1123, https://doi.org/10.1175/1520-0469(1972)029<1109:DOGSCC>2.0.CO;2.

  • Mayer, K. J., and E. A. Barnes, 2021: Subseasonal forecasts of opportunity identified by an explainable neural network. Geophys. Res. Lett., 48, e2020GL092092, https://doi.org/10.1029/2020GL092092.

  • Moghim, S., and R. L. Bras, 2017: Bias correction of climate model temperature and precipitation using artificial neural networks. J. Hydrometeor., 18, 1867–1884, https://doi.org/10.1175/JHM-D-16-0247.1.

  • Murphy, A., 1995: The coefficients of correlation and determination as measures of performance in forecast verification. Wea. Forecasting, 10, 681–688, https://doi.org/10.1175/1520-0434(1995)010<0681:TCOCAD>2.0.CO;2.

  • Peng, G., H.-M. Zhang, H. P. Frank, J.-R. Bidlot, M. Higaki, S. Stevens, and W. R. Hakins, 2013: Evaluation of various surface wind products with OceanSITES buoy measurements. Wea. Forecasting, 28, 1281–1303, https://doi.org/10.1175/WAF-D-12-00086.1.

  • Rumelhart, D. E., G. E. Hinton, and R. J. Williams, 1986: Learning representation by back-propagating errors. Nature, 323, 533–536, https://doi.org/10.1038/323533a0.

  • Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.

  • Serra, Y. L., X. Jiang, B. Tian, J. Amador-Astua, E. D. Maloney, and G. N. Kiladis, 2014: Tropical intraseasonal modes of the atmosphere. Annu. Rev. Environ. Resour., 39, 189–215, https://doi.org/10.1146/annurev-environ-020413-134219.

  • Stan, C., D. M. Straus, J. S. Frederiksen, H. Lin, E. D. Maloney, and C. Schumacher, 2017: Review of tropical–extratropical teleconnections on intraseasonal time scales. Rev. Geophys., 55, 902–937, https://doi.org/10.1002/2016RG000538.

  • Toms, B. A., E. A. Barnes, and I. Ebert-Uphoff, 2020: Physically interpretable neural networks for the geosciences: Applications to Earth system variability. J. Adv. Model. Earth Syst., 12, e2019MS002002, https://doi.org/10.1029/2019MS002002.

  • Toms, B. A., K. Kashinath, Prabhat, and D. Yand, 2021: Testing the reliability of interpretable neural networks in geoscience using the Madden–Julian oscillation. Geosci. Model Dev., 14, 4495–4508, https://doi.org/10.5194/gmd-14-4495-2021.

  • Vitart, F., and Coauthors, 2017: The Subseasonal to Seasonal (S2S) Prediction project database. Bull. Amer. Meteor. Soc., 98, 163–173, https://doi.org/10.1175/BAMS-D-16-0017.1.

  • Waliser, D., and Coauthors, 2009: MJO simulation diagnostics. J. Climate, 22, 3006–3030, https://doi.org/10.1175/2008JCLI2731.1.

  • Wang, W., P. H. A. J. M. Van Gelder, J. K. Vrijling, and J. Ma, 2006: Forecasting daily streamflow using hybrid ANN models. J. Hydrol., 324, 383–399, https://doi.org/10.1016/j.jhydrol.2005.09.032.

  • Wheeler, M. C., and H. H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932, https://doi.org/10.1175/1520-0493(2004)132<1917:AARMMI>2.0.CO;2.

  • Wibawa, A. P., A. B. P. Utama, H. Elmunsyah, U. Pujianto, F. A. Dwiyanto, and L. Hernandez, 2022: Time-series analysis with smoothed convolutional neural network. J. Big Data, 9, 44, https://doi.org/10.1186/s40537-022-00599-y.

  • Willmott, C. J., 1981: On the validation of models. Phys. Geogr., 2, 184–194, https://doi.org/10.1080/02723646.1981.10642213.

  • Willmott, C. J., 1982: Some comments on the evaluation of the model performance. Bull. Amer. Meteor. Soc., 63, 1309–1313, https://doi.org/10.1175/1520-0477(1982)063<1309:SCOTEO>2.0.CO;2.

  • Yoo, C., Y. Lee, D. Cho, J. Im, and D. Han, 2020: Improving local climate zone classification using incomplete building data and Sentinel 2 image based on convolutional neural networks. Remote Sens., 12, 3552, https://doi.org/10.3390/rs12213552.

  • Zhang, H.-M., J. J. Bates, and R. W. Reynolds, 2006: Assessment of composite global sampling: Sea surface wind speed. Geophys. Res. Lett., 33, L17714, https://doi.org/10.1029/2006GL027086.

  • Zhou, D.-X., 2020: Universality of deep convolutional neural networks. Appl. Comput. Harmonic Anal., 48, 787–794, https://doi.org/10.1016/j.acha.2019.06.004.
Save
  • Barnes, E. A., J. W. Hurell, I. Ebert-Uphoff, C. Anderson, and D. Anderson, 2019: Viewing forced climate patterns through an AI lens. Geophys. Res. Lett., 46, 13 38913 398, https://doi.org/10.1029/2019GL084944.

    • Search Google Scholar
    • Export Citation
  • Bentamy, A., and D. C. Fillon, 2012: Gridded surface wind fields from Metop/ASCAT measurements. Int. J. Remote Sens., 33, 17291754, https://doi.org/10.1080/01431161.2011.600348.

    • Search Google Scholar
    • Export Citation
  • Chattopadhyay, A., P. Hassanzadeh, and S. Pasha, 2020: Predicting clustered weather patterns: A test case for applications of convolutional neural networks to spatio-temporal climate data. Sci. Rep., 10, 1317, https://doi.org/10.1038/s41598-020-57897-9.

    • Search Google Scholar
    • Export Citation
  • Chen, Y., M. Zhang, M. Bai, and W. Chen, 2019: Improving the signal-to-noise ratio of seismological datasets by unsupervised machine learning. Seismol. Res. Lett., 90, 15521564, https://doi.org/10.1785/0220190028.

    • Search Google Scholar
    • Export Citation
  • Duchon, C. E., 1979: Lanczos filtering in one and two dimensions. J. Appl. Meteor., 18, 10161022, https://doi.org/10.1175/1520-0450(1979)018<1016:LFIOAT>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Dutta, D., and R. K. Bhattacharjya, 2022: A statistical bias correction technique for global climate model predicted near-surface temperature in India using the generalized regression neural network. J. Water Climate Change, 13, 854871, https://doi.org/10.2166/wcc.2022.214.

    • Search Google Scholar
    • Export Citation
  • Duveiller, G., D. Fasbender, and M. Meroni, 2016: Revisiting the concept of a symmetric index of agreement for continuous datasets. Sci. Rep., 6, 19401, https://doi.org/10.1038/srep19401.

    • Search Google Scholar
    • Export Citation
  • Flach, P., 2012: Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge University Press, 409 pp.

  • Geisser, S., 1975: The predictive sample reuse methods with applications. J. Amer. Stat. Assoc., 70, 320328, https://doi.org/10.1080/01621459.1975.10479865.

    • Search Google Scholar
    • Export Citation
  • Giante, S., A. S. Charles, S. Krishnaswamy, and G. Mishe, 2019: Visualizing the PHATE of neural networks. arXiv, 1908.02831v1, https://doi.org/10.48550/arXiv.1908.02831.

  • Gottschalck, J., and Coauthors, 2010: A framework for assessing operational Madden–Julian oscillation forecasts: A CLIVAR MJO Working Group project. Bull. Amer. Meteor. Soc., 91, 12471258, https://doi.org/10.1175/2010BAMS2816.1.

    • Search Google Scholar
    • Export Citation
  • Gu, J., and Coauthors, 2018: Recent advances in convolutional neural networks. Pattern Recognit., 77, 354377, https://doi.org/10.1016/j.patcog.2017.10.013.

    • Search Google Scholar
    • Export Citation
  • Hasselmann, K., 1976: Stochastic climate models. Part I: Theory. Tellus, 28, 473485, https://doi.org/10.1111/j.2153-3490.1976.tb00696.x.

    • Search Google Scholar
    • Export Citation
  • Janiga, M. A., C. J. Schreck III, J. A. Ridout, M. Flatau, N. P. Barton, E. J. Metzger, and C. A. Reynolds, 2018: Subseasonal forecasts of convectively coupled equatorial waves and the MJO: Activity and predictive skill. Mon. Wea. Rev., 146, 23372360, https://doi.org/10.1175/MWR-D-17-0261.1.

    • Search Google Scholar
    • Export Citation
  • Jiang, X., and D. E. Waliser, 2009: Two dominant subseasonal variability modes of the eastern Pacific ITCZ. Geophys. Res. Lett., 36, L04704, https://doi.org/10.1029/2008GL036820.

    • Search Google Scholar
    • Export Citation
  • Kikuchi, K., 2020: Extension of the bimodal intraseasonal oscillation index using JRA-55 reanalysis. Climate Dyn., 54, 919933, https://doi.org/10.1007/s00382-019-05037-z.

    • Search Google Scholar
    • Export Citation
  • Kikuchi, K., B. Wang, and Y. Kajikawa, 2012: Bimodal representation of the tropical intraseasonal oscillation. Climate Dyn., 38, 19892000, https://doi.org/10.1007/s00382-011-1159-1.

    • Search Google Scholar
    • Export Citation
  • Kingma, D. P., and J. Ba, 2014: Adam: A method for stochastic optimization. arXiv, 1412.6980v9, https://doi.org/10.48550/arXiv.1412.6980.

  • Labe, Z. M., and E. A. Barnes, 2021: Detecting climate signals using explainable AI with single-forcing large ensembles. J. Adv. Model. Earth Syst., 13, e2021MS002464, https://doi.org/10.1029/2021MS002464.

    • Search Google Scholar
    • Export Citation
  • Lau, K.-M., and P. H. Chan, 1986: Aspects of the 40-50 day oscillation during the Northern summer as inferred from outgoing longwave radiation. Mon. Wea. Rev., 114, 13541367, https://doi.org/10.1175/1520-0493(1986)114<1354:AOTDOD>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • LeCun, Y., Y. Bengio, and G. Hinton, 2015: Deep learning. Nature, 521, 436444, https://doi.org/10.1038/nature14539.

  • Li, S.-F., F. M. B. Jacques, R. A. Spicer, T. Su, T. E. V. Spicer, J. Yang, and Z.-K. Zhou, 2016: Artificial neural networks reveal a high-resolution climatic signal in leaf physiognomy. Palaeogeogr. Palaeoclimatol. Palaeoecol., 442, 111, https://doi.org/10.1016/j.palaeo.2015.11.005.

    • Search Google Scholar
    • Export Citation
  • Liebmann, B., and C. Smith, 1996: Description of a complete (interpolated) outgoing longwave radiation dataset. Bull. Amer. Meteor. Soc., 77, 12751277, https://doi.org/10.1175/1520-0477-77.6.1274.

    • Search Google Scholar
    • Export Citation
  • Liu, Y., and Coauthors, 2016: Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv, 1605.01156v1, https://doi.org/10.48550/arXiv.1605.01156.

  • Lybarger, N. D., C.-S. Shin, and C. Stan, 2020: MJO wind energy and prediction of El Niño. J. Geophys. Res. Oceans, 125, e2020JC016732, https://doi.org/10.1029/2020JC016732.

    • Search Google Scholar
    • Export Citation
  • Madden, R. A., and P. R. Julian, 1971: Detection of a 40–50 day oscillation in the zonal wind in the tropical Pacific. J. Atmos. Sci., 28, 702708, https://doi.org/10.1175/1520-0469(1971)028<0702:DOADOI>2.0.CO;2.

    • Search Google Scholar
    • Export Citation
  • Madden, R. A., and P. R. Julian, 1972: Description of global-scale circulation cells in the tropics with 40–50 day period. J. Atmos. Sci., 29, 11091123, https://doi.org/10.1175/1520-0469(1972)029<1109:DOGSCC>2.0.CO;2.

  • Mayer, K. J., and E. A. Barnes, 2021: Subseasonal forecasts of opportunity identified by an explainable neural network. Geophys. Res. Lett., 48, e2020GL092092, https://doi.org/10.1029/2020GL092092.

  • Moghim, S., and R. L. Bras, 2017: Bias correction of climate model temperature and precipitation using artificial neural networks. J. Hydrometeor., 18, 1867–1884, https://doi.org/10.1175/JHM-D-16-0247.1.

  • Murphy, A., 1995: The coefficients of correlation and determination as measures of performance in forecast verification. Wea. Forecasting, 10, 681–688, https://doi.org/10.1175/1520-0434(1995)010<0681:TCOCAD>2.0.CO;2.

  • Peng, G., H.-M. Zhang, H. P. Frank, J.-R. Bidlot, M. Higaki, S. Stevens, and W. R. Hankins, 2013: Evaluation of various surface wind products with OceanSITES buoy measurements. Wea. Forecasting, 28, 1281–1303, https://doi.org/10.1175/WAF-D-12-00086.1.

  • Rumelhart, D. E., G. E. Hinton, and R. J. Williams, 1986: Learning representations by back-propagating errors. Nature, 323, 533–536, https://doi.org/10.1038/323533a0.

  • Saha, S., and Coauthors, 2014: The NCEP Climate Forecast System version 2. J. Climate, 27, 2185–2208, https://doi.org/10.1175/JCLI-D-12-00823.1.

  • Serra, Y. L., X. Jiang, B. Tian, J. Amador-Astua, E. D. Maloney, and G. N. Kiladis, 2014: Tropical intraseasonal modes of the atmosphere. Annu. Rev. Environ. Resour., 39, 189–215, https://doi.org/10.1146/annurev-environ-020413-134219.

  • Stan, C., D. M. Straus, J. S. Frederiksen, H. Lin, E. D. Maloney, and C. Schumacher, 2017: Review of tropical-extratropical teleconnections on intraseasonal time scales. Rev. Geophys., 55, 902–937, https://doi.org/10.1002/2016RG000538.

  • Toms, B. A., E. A. Barnes, and I. Ebert-Uphoff, 2020: Physically interpretable neural networks for the Geosciences: Applications to Earth System variability. J. Adv. Model. Earth Syst., 12, e2019MS002002, https://doi.org/10.1029/2019MS002002.

  • Toms, B. A., K. Kashinath, Prabhat, and D. Yang, 2021: Testing the reliability of interpretable neural networks in geoscience using the Madden–Julian oscillation. Geosci. Model Dev., 14, 4495–4508, https://doi.org/10.5194/gmd-14-4495-2021.

  • Vitart, F., and Coauthors, 2017: The Subseasonal to Seasonal (S2S) Prediction project database. Bull. Amer. Meteor. Soc., 98, 163–173, https://doi.org/10.1175/BAMS-D-16-0017.1.

  • Waliser, D., and Coauthors, 2009: MJO simulation diagnostics. J. Climate, 22, 3006–3030, https://doi.org/10.1175/2008JCLI2731.1.

  • Wang, W., P. H. A. J. M. Van Gelder, J. K. Vrijling, and J. Ma, 2006: Forecasting daily streamflow using hybrid ANN models. J. Hydrol., 324, 383–399, https://doi.org/10.1016/j.jhydrol.2005.09.032.

  • Wheeler, M. C., and H. H. Hendon, 2004: An all-season real-time multivariate MJO index: Development of an index for monitoring and prediction. Mon. Wea. Rev., 132, 1917–1932, https://doi.org/10.1175/1520-0493(2004)132<1917:AARMMI>2.0.CO;2.

  • Wibawa, A. P., A. B. P. Utama, H. Elmunsyah, U. Pujianto, F. A. Dwiyanto, and L. Hernandez, 2022: Time-series analysis with smoothed convolutional neural network. J. Big Data, 9, 44, https://doi.org/10.1186/s40537-022-00599-y.

  • Willmott, C. J., 1981: On the validation of models. Phys. Geogr., 2, 184–194, https://doi.org/10.1080/02723646.1981.10642213.

  • Willmott, C. J., 1982: Some comments on the evaluation of model performance. Bull. Amer. Meteor. Soc., 63, 1309–1313, https://doi.org/10.1175/1520-0477(1982)063<1309:SCOTEO>2.0.CO;2.

  • Yoo, C., Y. Lee, D. Cho, J. Im, and D. Han, 2020: Improving local climate zone classification using incomplete building data and sentinel 2 image based on convolutional neural networks. Remote Sens., 12, 3552, https://doi.org/10.3390/rs12213552.

  • Zhang, H.-M., J. J. Bates, and R. W. Reynolds, 2006: Assessment of composite global sampling: Sea surface wind speed. Geophys. Res. Lett., 33, L17714, https://doi.org/10.1029/2006GL027086.

  • Zhou, D.-X., 2020: Universality of deep convolutional neural networks. Appl. Comput. Harmonic Anal., 48, 787–794, https://doi.org/10.1016/j.acha.2019.06.004.

  • Fig. 1.

    The architecture of the CNN-based filter. All layers have the same size, which is the sample size. The dashed–dotted line denotes the kernel size. In the first convolutional layer the kernel size is p = 90, and in the second convolutional layer p = 30. Grid = lon × lat. Time represents the number of samples (days). In each layer, a rectangle represents one grid point. The horizontal arrows show the workflow of the algorithm, excluding the hidden layers in the convolutional layers. The gray line connecting the input layer and the subtraction layer denotes that the input data are passed directly from the input layer to the subtraction layer.
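
    For readers who want a concrete picture of this workflow, the following is a minimal Python/Keras sketch, not the authors' implementation; the single-feature layout at one grid point, the "same" padding, the 365-day sample length, and the mean-squared-error loss with the Adam optimizer (Kingma and Ba 2014) are illustrative assumptions.

      # Minimal sketch of a two-convolution filter with a subtraction step
      # (illustrative only; all settings below are assumptions, not the paper's code).
      from tensorflow.keras import layers, Model

      n_days = 365                                                   # assumed sample length (days)
      inputs = layers.Input(shape=(n_days, 1))                       # total daily anomalies at one grid point
      x = layers.Conv1D(1, kernel_size=90, padding="same")(inputs)   # first convolutional layer, p = 90
      x = layers.Conv1D(1, kernel_size=30, padding="same")(x)        # second convolutional layer, p = 30
      # The input anomalies are passed directly to the subtraction layer
      # (gray line in Fig. 1), which removes the convolved signal from them.
      outputs = layers.Subtract()([inputs, x])
      model = Model(inputs, outputs)
      model.compile(optimizer="adam", loss="mse")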

  • Fig. 2.

    Training and validation loss for the OLR data at a grid point located at 160°E on the equator.

  • Fig. 3.

    Hovmöller diagrams averaged over 7.5°S–7.5°N for the testing period 1 Jan 2015–31 Dec 2015. (top) Zonal wind stress (N m−2) and (bottom) OLR (W m−2). (a),(e) The total daily anomaly of the fields. (b),(f) The 30–90-day filtered anomalies using the Lanczos filter. (c),(g) The 30–90-day filtered anomalies using the CNN-based filter. (d),(h) The Lanczos filtered anomalies minus the CNN-based filtered anomalies.

  • Fig. 4.

    Hovmöller diagrams of the effectiveness of the CNN-based filtering model, measured by (a) ME and (b) standard deviation (σ) obtained by resampling the training, validation, and testing periods into six folds. Values are averaged over 7.5°S–7.5°N.

  • Fig. 5.

    (a) Time series of 30–90-day filtered anomalies using the Lanczos filter (red solid curve) and the CNN-based filter (blue dashed curve) averaged over 125°–160°E, 7.5°S–7.5°N for the testing period 1 Jan 2015–31 Dec 2016, along with their correlation coefficient (r). (b) The power spectrum density times frequency of the filtered anomalies shown in (a). (c) IOA and (d) RMSE between the filtered anomalies (averaged over 7.5°S–7.5°N) obtained using the Lanczos and CNN-based filters during the testing period 1 Jan 2015–31 Dec 2016.
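
    As a hedged illustration of how the agreement metrics in (c) and (d) can be computed, the sketch below uses the Willmott (1981) index of agreement and the root-mean-square error; the arrays lanczos and cnn are hypothetical 1-D time series of filtered anomalies at one longitude.

      # Sketch of the agreement metrics (assumed to follow Willmott 1981).
      import numpy as np

      def index_of_agreement(ref, test):
          """Willmott (1981) index of agreement between a reference and a test series."""
          num = np.sum((test - ref) ** 2)
          den = np.sum((np.abs(test - ref.mean()) + np.abs(ref - ref.mean())) ** 2)
          return 1.0 - num / den

      def rmse(ref, test):
          return np.sqrt(np.mean((test - ref) ** 2))

      # ioa = index_of_agreement(lanczos, cnn)   # lanczos, cnn: hypothetical arrays
      # err = rmse(lanczos, cnn)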

  • Fig. 6.

    As in Fig. 5, but for OLR anomalies averaged over 160°–190°E, 7.5°S–7.5°N.

  • Fig. 7.

    R² between the filtered anomalies obtained using the Lanczos and CNN-based filters during 1 Jan 2015–31 Dec 2016 for (a) zonal wind stress and (b) OLR. The time series at each zonal grid point represent the average between 7.5°S and 7.5°N.

  • Fig. 8.

    Longitude–time plots of (a) IOA between τxMJO based on Lanczos-filtered daily anomalies (τx30–90(L)) and τxMJO based on unfiltered daily anomalies (τx). (b) IOA between τxMJO based on Lanczos-filtered daily anomalies and τxMJO based on CNN-filtered daily anomalies (τx30–90(CNN)). (c) The difference between (b) and (a). See text for details.

  • Fig. 9.

    Hovmöller diagrams of daily OLR filtered anomalies (shading) and the MJO signal (contours) averaged over 7.5°S–7.5°N for the period 1 Sep 2018–31 Mar 2019. Above the gray dashed line, anomalies are filtered using a 25–90-day Lanczos filter. Below the gray dashed line, anomalies are filtered using (a) the proxy method (see text for details) and (b) the CNN-based method. The gray dashed line denotes the last date for which data would be available for the proxy method. In both cases the MJO is computed following the Kikuchi (2020) method.

  • Fig. 10.

    (top) Comparison of the MJO amplitude (PC1 and PC2) and (bottom) the phase space for the period 1 Sep 2018–31 Mar 2019. The black line denotes the Lanczos filter, for which the calculation assumes that future data are available. The blue and red lines correspond to the proxy and CNN calculations, respectively. The vertical line on 1 Jan 2019 denotes the date after which no data would be available for applying the Lanczos filter.

  • Fig. 11.

    Power density spectra using a 25–90-day Lanczos filter (red solid line), the Kikuchi et al. (2012) proxy method (blue dashed line), and the CNN-based method (red dashed line). (a) Daily OLR anomalies. (b) The MJO component of the daily OLR anomalies. Power spectra are calculated for the time series of the area average over 160°–190°E, 7.5°S–7.5°N between 1 Sep 2018 and 31 Mar 2019.
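
    A variance-preserving spectrum of this kind (power density times frequency, as in Fig. 5b) can be sketched as follows; the use of scipy.signal.periodogram and the array name olr_series are assumptions made for illustration rather than the authors' spectral estimator.

      # Sketch of a power density times frequency spectrum for a daily time series.
      from scipy.signal import periodogram

      def variance_spectrum(series):
          freq, psd = periodogram(series, fs=1.0)   # frequency in cycles per day
          return freq, freq * psd                   # power density times frequency

      # freq, spec = variance_spectrum(olr_series)  # olr_series: hypothetical 1-D array
      # periods = 1.0 / freq[1:]                    # periods in days, skipping the zero frequency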
