A data-assimilating ⅓° regional dynamical ocean model is evaluated on its ability to synthesize components of the Tropical Pacific Ocean Observing System. The four-dimensional variational data assimilation (4DVAR) method adjusts initial conditions and atmospheric forcing for overlapping 4-month model runs, or hindcasts, that are then combined to give an ocean state estimate for the period 2010–13. The state estimate achieves consistency, within the assumed uncertainties, with satellite SSH observations and Argo profiles. Comparison to independent observations from Tropical Atmosphere Ocean (TAO) moorings shows that for time scales shorter than 100 days, the state estimate improves estimates of TAO temperature relative to an optimally interpolated Argo product. The improvement is greater at time scales shorter than 20 days, although unpredicted variability in the TAO temperatures implies that TAO observations provide significant information in that band. Larger discrepancies between the state estimate and independent observations from Spray gliders deployed near the Galápagos, Palau, and Solomon Islands are attributed to insufficient model resolution to capture the dynamics in strong current regions and near coasts. The sea surface height forecast skill of the model is also assessed. Model forecasts using climatological forcing and boundary conditions are more skillful than climatology out to 50 days; by comparison, persistence is more skillful than climatology only out to approximately 20 days. Hindcasts using reanalysis products for atmospheric forcing and open boundary conditions are more skillful than climatology for approximately 120 days or longer, with the exact time scale depending on the accuracy of the state estimate used for initialization and on the reanalysis forcing. Estimating the model representational error is a goal of these experiments.
1. Introduction

Estimation of the tropical Pacific Ocean state is important for seasonal to interannual predictability (McPhaden et al. 1998; Guilyardi et al. 2009). Variability of the thermocline depth and the propagation of Kelvin waves along the equator play important roles in El Niño–Southern Oscillation (ENSO), driving the need for observing subsurface ocean temperature, among other variables (Capotondi et al. 2015). There is a large observing system in the tropical Pacific, including the Tropical Atmosphere Ocean/Triangle Trans-Ocean Buoy Network (TAO/TRITON) mooring array (http://www.pmel.noaa.gov/tao/), Argo profiling floats (http://www.argo.ucsd.edu), Spray gliders (http://spray.ucsd.edu/), and many other in situ and satellite observations. However, there is no consensus on what is sufficient to understand the ocean–atmosphere dynamics in the region in order to, for example, make accurate ENSO predictions.
Ocean observing systems are frequently evaluated by making estimates of ocean state (“reanalysis”) and comparing them to withheld observations (Yan et al. 2007; Fujii et al. 2015a,b; Oke et al. 2015). Errors in reanalysis products include both formal mapping error and representation error. Formal mapping error arises primarily from lack of information, such as sparse or noisy observations. Representation error comes from low resolution, missing physics, or errors in the model–data synthesis methodology. Data withholding experiments quantify the impact of components of the Tropical Pacific Observing System (TPOS) on the estimates. However, mapping methods vary substantially, primarily with regard to how covariance estimates are determined and the level of physics, such as circulation models, that is included. Representation error also comes from mapping resolution and the choice to consider the ocean mesoscale as a signal to be mapped (in our case) or noise. It is also useful to evaluate the contribution of representation error incurred from the choice of model, the resolution, and the assimilation method.
It is hypothesized that including dynamical and thermodynamic constraints from a numerical model in the analysis should improve the ocean state estimates, which remains to be verified. In this work, the performance of a four-dimensional variational data assimilation (4DVAR) state estimate enforcing consistency with model equations over 4-month periods at mesoscale resolution is quantified. The fit tests the hypothesis that the model can be made consistent with the observations within the assumed representational errors. The benefit of including model physics is evaluated by examining the misfit between the state estimate and the assimilated observations, by cross validating against withheld datasets (Spray gliders and the TAO/TRITON array), and by comparing the skill of the state estimate against the skill of objectively mapped Argo products, such as the products by Roemmich and Gilson (2009, hereafter RG09) and Gasparin et al. (2015, hereafter G15). The goal of this work is to evaluate the combined mapping and representation error against dependent and independent observations to determine the benefit of this 4DVAR method in estimating subsurface temperature and salinity in the tropical Pacific. The quality of the optimization and model is also evaluated by using the optimized state to forecast the next few months to compare against future observations not used in the assimilation. These tests explore the predictability of the tropical Pacific Ocean state using an ocean-only model, assessing the information content of the ocean state (as opposed to atmospheric forcing) and temporal coherence.
Evaluating model representational errors is an important component of the design of data assimilation systems (Karspeck 2016). Some components of the representational error are obvious, such as tidally driven internal waves in a model without tides, subgrid-scale features, and nonhydrostatic processes, but the line between eddies that are "permitted" or "resolved" and errors due to the discretized topography, approximate boundary layers, and numerical diffusion is more difficult to draw. If a dense array of perfect observations were available at every time step, it would be easy to initialize the model from the completely observed field and compare the forecast to the next completely observed field to estimate model errors. This is rarely achievable, and representation errors are instead part of the hypothesis represented by the model and its parameterizations. In other words, the model is proposed as a representation of the true physics within error bars, and the fitting done by the assimilation is an attempt to falsify that hypothesis. If the data are inadequate, then the hypothesis is not tested: redundant observations are needed to check the dynamics. In the absence of sufficient observations, we must therefore iterate: assume a representation error for a particular model and region, perform the fit, and then check whether the residuals are consistent with the hypothesized representational error. If they are too large, the hypothesis is rejected: the model cannot fit the observations within the assumed error bars. If they are smaller than expected, the error bars could perhaps be reduced, although small residuals may also simply reflect observations that are too sparse. Strictly, Bayesian analysis does not allow altering the prior and redoing the estimation, although this is sometimes done in an approximate way.
The TPOS 2020 project recommends the use of data assimilation to combine observations and to assess the design of the TPOS. A necessary first step in this procedure is to have a measure of the errors and performance of the assimilation systems. In this case, we evaluate the performance of a 4DVAR system as a necessary step to inform use of the output for dynamical analysis or for data impact studies.
2. State estimation method

Using the machinery developed by the consortium for Estimating the Circulation and Climate of the Ocean (ECCO; Wunsch and Heimbach 2013), we have developed an adjoint-based assimilation system to estimate the state of the tropical Pacific Ocean following the work of Hoteit et al. (2008, 2010). The adjoint model enables optimization of the prescribed initial temperature and salinity, and the atmospheric exchanges of momentum, heat, and freshwater in order to bring the forward general circulation model solution into consistency with constraining TPOS datasets.
a. Model setup
We use a regional configuration of the Massachusetts Institute of Technology General Circulation Model (MITgcm) (Marshall et al. 1997). The domain extends from 26°S to 30°N and from 104°E to 68°W (Fig. 1). The eddy-permitting model setup has ⅓° resolution and 51 thickness-varying vertical levels (5-m vertical resolution in the upper ocean). Bathymetry is derived from ETOPOv2 (https://www.ngdc.noaa.gov/mgg/global/etopo2.html). Lateral open-ocean boundary conditions are prescribed from the global ½° reanalysis from the Hybrid Coordinate Ocean Model with Navy Coupled Ocean Data Assimilation (HYCOM/NCODA; http://hycom.org). The vertical mixing is parameterized by the KPP formulation of Large et al. (1994). Mixing parameter values used in the setup are given in Table 1. Initial conditions and prescribed atmospheric state are control parameters and are discussed in section 2d.
The horizontal resolution is sufficient to resolve key dynamical balances in the tropical Pacific (Hoteit et al. 2008). Current observations were not assimilated, but simulated equatorial currents are consistent with the observations of Johnson et al. (2002) along several meridional sections (Fig. 2), although the two cover different time ranges. The main eastward flows (blue shading), namely the Equatorial Undercurrent centered on the equator and the North Equatorial Countercurrent centered near 7°N, are represented well. The model also captures the mean position of the South Equatorial Current (red shading at 0–100 m), though its northern branch is weaker than observed. The Equatorial Intermediate Current, observed in the western part of the basin around 1°N at 350 m, is also weak in the model. The deep eastward jets in the western sections are known to be hard to reproduce without low diffusivity (McCreary et al. 2002; Furue et al. 2007, 2009). The mesoscale is resolved in the region of our analysis (17°S–17°N), but small-scale eddies are not represented. Even with higher model resolution, finescale structures would be damped by the viscosity values chosen to suppress instabilities in the adjoint model.
b. Data assimilation
Data assimilation is done via the adjoint method, also known as 4DVAR, using the methodology from the ECCO consortium (Wunsch and Heimbach 2013). The “cost function” to be minimized is a weighted sum of quadratic norms of model–data misfit and changes to the control variables. The weights on the individual cost function terms are the inverse of the combined constraint and model representation error variance. An efficient cost function descent is facilitated by the existence of an adjoint model, which is readily attained because the MITgcm has been designed to enable computer generation of its adjoint model using the algorithmic differentiation tool Transformation of Algorithms in Fortran (TAF) (Giering and Kaminski 1998; Heimbach et al. 2002). The adjoint model is used to determine the gradient of the cost function with respect to a set of control parameters. The initial conditions and atmospheric state control parameters are discussed in section 2d.
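In generic notation (not the paper's exact symbols), a cost function of this form can be sketched as

```latex
J(\mathbf{x}_0, \mathbf{u}) =
\sum_{i} \left[\mathbf{y}_i - H_i(\mathbf{x}_i)\right]^{\mathrm{T}}
\mathbf{R}_i^{-1}
\left[\mathbf{y}_i - H_i(\mathbf{x}_i)\right]
+ (\mathbf{x}_0 - \mathbf{x}_0^{b})^{\mathrm{T}} \mathbf{B}^{-1} (\mathbf{x}_0 - \mathbf{x}_0^{b})
+ \mathbf{u}^{\mathrm{T}} \mathbf{Q}^{-1} \mathbf{u},
```

where the model state at time i, x_i, evolves dynamically from the initial conditions x_0 under the forcing adjustments u; y_i are the observations and H_i the observation operator; R_i is the combined observation and representation error covariance; and B and Q are the prior covariances of the initial-condition and forcing adjustments, respectively. The adjoint model supplies the gradient of J with respect to x_0 and u.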
The adjoint model is a tangent linearization of the general circulation model, and the accuracy of the adjoint-derived gradients, and thus the success of the optimization, depends on the degree of nonlinearity of the model solution. The degree of solution nonlinearity, however, is scale specific. One can focus on the largest scale ocean dynamics, which tend to be linear, by using a coarse model resolution or by increasing the parameterized diffusivity and viscosity (e.g., Köhl et al. 2007). This also has the effect of transferring mapping error to representation error by filtering out smaller scales. To best utilize the model dynamical constraints and all the available data constraints, assimilation windows should be chosen to be as long as possible before nonlinearities dominate and control deteriorates. Thus, assimilation window duration depends on the model setup and the goals of the estimation. For the present work, where we wish to resolve the ocean mesoscale, we found, after a series of experiments with state estimates ranging from one month to one year, that a 4-month assimilation window was optimal. This window length allowed efficient fitting of the observations while still allowing the model physics to be of primary relevance. However, as in Hoteit et al. (2005) and Köhl et al. (2007), we increased viscosity in the adjoint model (Table 1) to attenuate growing sensitivities at small scales.
The 4-month assimilation window is long enough for observational constraints to propagate information through the domain and influence atmospheric forcing but short enough that the location and timing of many mesoscale eddies and planetary waves can be captured by adjusting initial conditions. These 4-month assimilations were carried out every 2 months, meaning that at any given time there are two solutions. The solutions are patched together with a linear combination over a 60-day period centered in the middle of the overlap region.
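The patching step can be sketched as below, assuming daily fields and a 60-day overlap; the function name and array layout are illustrative, not the production code.

```python
import numpy as np

def blend_windows(sol_a, sol_b, n_blend=60):
    """Blend the trailing n_blend days of solution A with the leading
    n_blend days of solution B using linear weights.

    sol_a, sol_b: arrays of daily fields with time as axis 0; the last
    n_blend days of sol_a coincide with the first n_blend days of sol_b.
    """
    w = np.linspace(0.0, 1.0, n_blend)  # weight on sol_b ramps from 0 to 1
    overlap = (1.0 - w)[:, None] * sol_a[-n_blend:] + w[:, None] * sol_b[:n_blend]
    return np.concatenate([sol_a[:-n_blend], overlap, sol_b[n_blend:]], axis=0)
```

The linear ramp guarantees the merged product transitions continuously from one optimized solution to the next, avoiding jumps at window boundaries.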
c. Observational constraints and uncertainties
This subsection describes the observational constraints used to estimate the state of the tropical Pacific in the domain shown in Fig. 1: 17°S to 17°N and east of 130°E, excluding the shallow area west of Papua New Guinea. Temperature and salinity data from Argo profiling floats are the most abundant source of in situ observations. For the period January 2010–December 2013, there are 42 814 temperature profiles and 42 224 salinity profiles in the data assimilation region.
Shipboard conductivity–temperature–depth (CTD) profiles (obtained from http://cchdo.ucsd.edu) and expendable bathythermograph (XBT) profiles (obtained from http://www-hrx.ucsd.edu) provide additional in situ constraints. Each observation is given an uncertainty that is estimated based on a first-guess representation error, which dominates over instrument error. As a starting estimate of representation error, the root-mean-square (RMS) misfit between monthly RG09 maps and Argo profiles is optimally interpolated. We do not account for formal mapping error in the RG09 product. As such, where the RG09 solution is well constrained we are constraining the model to be as consistent with the Argo profiles as is the RG09 product. However, we may be prescribing an incorrectly reduced uncertainty in poorly constrained regions. The prescribed values used range from 0.05°C at depth to approximately 1.2°C in the thermocline. For salinity, the uncertainty is 0.01 at depth and reaches approximately 0.15 in the halocline.
Along-track altimetry provides the primary satellite constraint. We constrain to sea surface height (SSH) observations from Jason-1, Jason-2, the Environmental Satellite (Envisat), Cryosat-2, and the Ka-Band Altimeter (AltiKa), fitting the mean and anomaly separately. SSH anomalies are obtained from the Radar Altimeter Database System (RADS; Scharroo et al. 2013) and bin-averaged daily onto the model grid. The mean of the SSH observations during an assimilation window is given a smaller weight than the anomalies because of uncertainty associated with the geoid. The SSH data undergo careful quality control prior to the optimization. The procedure is relatively conservative, as problematic observations are damaging to the efficiency of the optimization. It is as follows. Locations with fewer than two observations per year are removed from the dataset because they will not provide a strong constraint. SSH observations at locations shallower than 500 m are also removed because we expect both increased observational error due to tides and increased model representational error in these regions. For a given location, the SSH time series is processed and anomalies that exceed five standard deviations are removed. The time series is also compared to the AVISO SSH product, and observations whose misfit with this product exceeds three standard deviations are removed. A uniform uncertainty of 3 cm is assigned for the satellites with repeating orbits (Jason-1 and Jason-2), based on Ponte et al. (2007). The uncertainty is increased to 6 cm for nonrepeating orbits (Envisat, Cryosat-2, and AltiKa) because of the additional model error incurred by binning SSH observations taken from different locations into the same model grid box.
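The quality-control steps can be sketched for a single grid location as follows. The thresholds come from the text; the function name, array layout, and the interpretation of the three-standard-deviation misfit test are illustrative assumptions.

```python
import numpy as np

def qc_ssh(ssh, depth, aviso, min_obs_per_year=2, years=4,
           min_depth=500.0, outlier_sigma=5.0, misfit_sigma=3.0):
    """Quality-control a daily SSH anomaly time series at one location.
    Returns the series with rejected observations set to NaN, or None
    if the whole location is rejected.
    """
    valid = np.isfinite(ssh)
    if valid.sum() < min_obs_per_year * years:
        return None                      # too sparse to provide a strong constraint
    if depth < min_depth:
        return None                      # shallow: tides + model representation error
    out = ssh.copy()
    anom = out - np.nanmean(out)
    out[np.abs(anom) > outlier_sigma * np.nanstd(anom)] = np.nan
    misfit = out - aviso                 # compare to the AVISO mapped product
    out[np.abs(misfit - np.nanmean(misfit)) > misfit_sigma * np.nanstd(misfit)] = np.nan
    return out
```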
The mean dynamic topography constraint, which comes from the Danish National Space Center DNSC08 mean dynamic topography (MDT) (Andersen and Knudsen 2009), is assigned an uncertainty of 10 cm, primarily reflecting error in the geoid estimate (Pavlis et al. 2012).
Daily maps of sea surface temperature (SST) are derived from microwave radiometers and optimally interpolated by Remote Sensing Systems Inc. (http://www.remss.com/). This product is used as a constraint, with uncertainty prescribed from the RMS difference between Argo profiles and the RG09 product near the ocean surface. However, the daily SST values mapped to the model grid are not temporally or spatially independent, although they are treated as such by the assimilation. This means that the effective uncertainty of the observations is incorrectly reduced by a factor of 1/√(Nr), where Nr is the number of observations that are not independent, which we estimate as 100 based on 1° and 10-day correlation scales. In addition, uncertainty in the depth of the surface layer characterized by the observed skin temperature, and errors in the removal of the diurnal cycle, can produce errors of degrees in the tropics. The uncertainty is therefore multiplied by a factor of 10 to account for the redundancy of the observations and the extra representational errors. This reduces the impact of the remotely sensed SST.
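The effect of treating correlated observations as independent can be illustrated with synthetic data. The cluster structure below is a deliberately extreme stand-in for the 1°/10-day correlation scales, with Nr = 100 as assumed in the text; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_r = 100                     # assumed redundant observations per correlated cluster
n_clusters = 10
sigma = 0.5                   # per-observation error (deg C), illustrative
# one independent error per cluster, repeated n_r times (perfect correlation)
err = np.repeat(rng.normal(0.0, sigma, n_clusters), n_r)
obs = 25.0 + err              # 1000 synthetic SST "observations", only 10 independent
n = obs.size
naive_se = sigma / np.sqrt(n)           # standard error assuming independence
true_se = sigma / np.sqrt(n_clusters)   # standard error from independent samples only
print(true_se / naive_se)               # sqrt(n_r) = 10: error understated tenfold
```

Inflating each observation's prescribed uncertainty by √(Nr) = 10 undoes this artificial shrinkage of the effective error.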
d. Controls and uncertainties
As noted above, initial conditions and the atmospheric state are adjusted to bring the model solution into consistency with observational constraints. Initial conditions for the first assimilation window are obtained from the ECCO, version 4 (ECCOv4), global state estimate (Forget et al. 2015). For subsequent assimilations, the prior initial condition comes from the previous state estimate. The uncertainty is estimated from the RG09 product just like for in situ observations but multiplied by a factor of 10 to allow for significant departures from the prior coarse-resolution ECCOv4 product. A correlation scale of 250 km in the zonal direction, 60 km in the meridional direction, and 10 m in the vertical is imposed via a smoothing operator on initial condition adjustments (Forget et al. 2015).
First-guess simulations were forced with the 6-hourly atmospheric state from the ERA-Interim data produced by the European Centre for Medium-Range Weather Forecasts (ECMWF) (Dee et al. 2011). The surface wind vector, air temperature, shortwave radiation, and air humidity are optimized. These fields are constrained to remain within uncertainty bounds of the ERA-Interim. The uncertainty is proportional to the spatially varying RMS of 6-hourly ECMWF fields. Controls are applied over 10-day periods. A spatial correlation scale of 500 km zonally and 120 km meridionally is imposed for the forcing adjustments via a smoothing operator.
3. Optimized model state
That we are able to reduce the cost is evidence that the adjoint gradients are at least partially valid and that the method is working. True success is measured by the statistics of the normalized model–data differences, which we discuss below. We note that these normalized difference statistics depend on the a priori uncertainties assigned. As stated above, our uncertainties were derived from the RG09 product and are expected to be generally too large. An advantage of this derivation of uncertainty is that it is reproducible. However, with the insight gained regarding the assimilation performance, a next step would be to refine these a priori uncertainties. For each 4-month assimilation, we have a "prior" solution (or iteration 0), which is the forward model run with atmospheric forcing from ERA-Interim and unadjusted initial conditions. The state estimate is the solution of the forward model run with optimized forcing and initial conditions, obtained by iteratively running the adjoint model and adjusting the controls until the cost penalty of further increasing the controls becomes roughly equal to the cost reduction from decreasing the misfit with observations. Typically, we reach this point in 10–20 iterations, during which the cost steadily descends (not shown).
As mentioned above, the initial conditions for the first assimilation (January 2010) are obtained from ECCOv4, but for all other assimilations we use the optimized state from the previous assimilation (e.g., the initial conditions for the March–June 2010 assimilation are obtained from the January–April 2010 optimized state). For comparison, we also ran the assimilations starting in 2010 and 2011 with ECCOv4 initial conditions; we found that the prior cost was reduced by 42% on average when initializing the model with the solution from the previous state estimate as compared to ECCOv4. This number is a measure of the enhanced fit to the observations by the forecast from the optimized state. As a result, the cost descent is faster when assimilations are initialized with the revised state. See also our discussion of the forecasts (section 5) for a comparison of forward runs initialized with ECCOv4 and with the previous state estimate.
To better understand how the optimization improves the fit to observations, we examine the normalized cost associated with each observational source. The normalized cost is calculated as the weighted squared misfit divided by the number of independent observations within the assimilation period (Fig. 3). The average is calculated over all assimilations over the period 2010–13. Vertical error bars indicate the standard deviation of the normalized cost, and the star symbols indicate the minimum and maximum values. A value of 1 for the normalized cost means that the average misfit is equal to the prescribed uncertainty and thus that the solution is acceptable. However, a satisfactory solution also requires the absence of large-scale patterns of high or low misfit. Figure 4 shows the mean and standard deviation of the spatially binned normalized model–observation misfit to Argo and confirms that the misfit structure has no regional biases.
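A minimal sketch of the normalized cost for one observational source (illustrative, not the assimilation code):

```python
import numpy as np

def normalized_cost(model, obs, sigma):
    """Average weighted squared misfit per observation.

    model, obs: model-equivalent and observed values (same shape);
    sigma: prescribed uncertainty per observation. A value near 1
    means the misfit matches the prescribed uncertainty on average.
    """
    r = (model - obs) / sigma
    return np.mean(r ** 2)
```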
The difference in normalized cost between the prior solution and the state estimate gives an indication of the usefulness of the data assimilation method. In our case, since the prior ocean state estimate comes from the optimized solution for the previous assimilation period and the 4-month assimilation periods overlap by 2 months, the prior solution is already influenced by some of the observations. Still, it is interesting to examine how much the cost drops as a result of the assimilation. We find that the normalized cost for in situ observations used as constraints (Argo, CTD, XBT) is reduced on average by 34%–46%. Average normalized cost is reduced by 20% for SSH and 22% for SST. The cost from independent datasets is not included in the total cost calculation, but it is still evaluated and monitored. For TAO temperature, we get an improvement of 30%. The improvement is smaller for Spray observations, with 19% for temperature and 5% for salinity, likely due to the location of the glider samples near complex topographic features and the relatively coarse model resolution.
The SSH anomalies time series in Fig. 5 show that the estimated SSH agrees well with the AVISO product along the equator. Note that the model is constrained directly to the RADS along-track observations, not to the AVISO product. SSH anomaly snapshots in Fig. 6 show how well the mesoscale variability is captured.
Figure 7 shows that SST is well captured in spite of the large uncertainty assigned to it. This is partly because SST is easily fit to observations by adjusting surface heat fluxes, whereas fitting SSH requires adjustments to temperature and salinity throughout the water column. The SST snapshots show how the state estimate captures tropical instability waves and could be used to study these dynamical features.
By optimizing the initial conditions and surface boundary conditions, we are able to bring our solution into consistency with the observations (Figs. 6, 7). Because the ocean model physics are a hard constraint, there is no guarantee that we can achieve this consistency. Eddies are produced through instabilities and surface forcing and then propagate according to model dynamics; they would not fit the data if the dynamics were significantly flawed. Thus, we find that the model dynamics are adequate to fit the observations within the prescribed uncertainty and to perform well against withheld observations. Reducing the prior uncertainty would test the dynamics more stringently and determine whether the model can represent even more of the observed signals.
4. Validation with independent datasets
Additional skill assessment for the 4-yr state estimate is made by comparing the temperature at 100 m to the observed values from the TAO array, which was not a constraint in producing the state estimate. Time series are shown in Fig. 8.
A quantitative assessment of the results for the equatorial moorings is given in Table 2. The correlation between our state estimate and the daily TAO data is compared with the correlation between Argo mapped products and TAO. Three Argo products are used. The first is RG09 (a version produced for the tropical Pacific, with 10-day averaged maps produced every 5 days). The second is G15, a new mapping of Argo data; it is similar in methodology to RG09 but employs a more accurate representation of the space–time covariance of the data. The third product, also produced by G15, incorporates altimetric data into the mapping; as an experimental product, it is currently available at the location of a single TAO mooring (0°N, 140°W). The statistical significance of the correlations is calculated at each location, with the effective number of independent samples taken as the number of daily observations divided by the decorrelation time scale estimated from the autocorrelation of the state estimate. It should be noted that because of the limited time range of the assimilation and gaps in the observations, the low-frequency variability has few realizations, and so its statistical significance is low. For the daily, high-frequency, and intermediate-frequency variability, only correlations that are significant above the 95% level are reported.
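One common way to estimate the effective number of independent samples is to divide the series length by a decorrelation scale obtained by integrating the autocorrelation out to its first zero crossing; this is a sketch of that approach, and the paper's exact estimator may differ.

```python
import numpy as np

def effective_dof(x, max_lag=None):
    """Effective number of independent samples in a daily time series.

    The decorrelation scale is taken as 1 + 2 * sum of the positive-lag
    autocorrelation up to its first zero crossing (an assumed estimator).
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    var = np.dot(x, x) / n
    max_lag = max_lag or n // 2
    t_decorr = 1.0
    for lag in range(1, max_lag):
        r = np.dot(x[:-lag], x[lag:]) / ((n - lag) * var)
        if r <= 0:
            break                    # stop at the first zero crossing
        t_decorr += 2.0 * r
    return n / t_decorr
```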
Both the state estimate and the Argo maps perform reasonably well, with correlation coefficients for raw time series ranging from 0.78 to 0.93. To further assess the skill, the variability is separated into high frequencies (less than 20 days), intermediate frequencies (20–100 days), and low frequencies (more than 100 days) using a simple running-mean filtering method. The results highlight how the state estimate captures the high-frequency variability that is not resolved by either the RG09 or G15 product, both of which are by design smooth in time.
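The running-mean band separation can be sketched as below, with window lengths in days; the edge handling is an illustrative choice.

```python
import numpy as np

def running_mean(x, window):
    """Centered running mean, same length as x; edges use a shrinking window."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same") / np.convolve(
        np.ones_like(x), kernel, mode="same")

def split_bands(x, short=20, long=100):
    """Split a daily series into high (<20 d), intermediate (20-100 d),
    and low (>100 d) frequency parts that sum back to x exactly."""
    low = running_mean(x, long)
    smooth = running_mean(x, short)
    return x - smooth, smooth - low, low
```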
For the high-frequency variability, correlation coefficients for the state estimate are in the range 0.27–0.56 (compared to not statistically significant to 0.45 for the Argo products). Even in the intermediate-frequency (20–100 days) band, the state estimate outperforms the Argo products, with higher correlation at 10 of the 11 equatorial moorings. At low frequency, the RG09 product performs better at 8 of the 11 moorings, but the correlations are high for both products (0.92–0.98 for the state estimate, 0.91–0.99 for Argo), and the significance is limited by the number of independent realizations, as mentioned above. We have also calculated the fractional error variance, defined as the variance of the difference between an estimate and the TAO observations divided by the variance of the TAO observations.
Figure 9 shows the fractional error variance for all TAO moorings at 100 m, with light-colored circles denoting high error. Large fractional errors at some of the moorings are often explained by low variance or short time series at those locations (Fig. 10). The performance of the state estimate is compared to the performance of RG09; similar results are obtained with G15. For example, at 0°N, 140°W, with all frequencies included the fractional error variance is 0.29 for RG09, 0.28 for G15, and 0.26 for G15 with altimetry. The state estimate has a lower fractional error variance of 0.21.
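The fractional error variance, taken here as the variance of the estimate-minus-observation difference over the variance of the observations (a definition assumed from context and consistent with the values quoted), can be computed as:

```python
import numpy as np

def fractional_error_variance(estimate, obs):
    """Variance of the misfit divided by the variance of the observations.

    A value of 0 is a perfect fit; values near or above 1 mean the
    estimate explains little of the observed variance.
    """
    diff = estimate - obs
    return np.var(diff) / np.var(obs)
```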
The state estimate is also compared to data from Spray gliders. There are significant discrepancies (Fig. 3), as the model has poor skill in the regions where the gliders sampled, which are near rough topography. Figure 11 shows the daily-averaged misfits between the state estimate and glider observations near the Galápagos, Palau, and Solomon Islands. The fractional error variance for 100-m temperature for each region is 2.1 (Galápagos), 0.4 (Palau), and 0.5 (Solomon). For 100-m salinity it is 0.8 (Galápagos), 1.0 (Palau), and 0.9 (Solomon). The large errors indicate that glider observations provide important information that the state estimate does not capture. Future work will involve using those observations as constraints to quantify how much the solution can be improved. Because the gliders are deployed in boundary regions where one expects the generation of instabilities and waves, the information may propagate into the domain and help constrain the solution over a large area of the tropical Pacific. Higher model resolution may also be needed to adequately reproduce these observations.
5. Forecasts

Forecasting sea surface elevation can be used as another test of the skill of the model and the state estimate. We use different methods, described below, to predict SSH over a 120-day period and to compare the predictions to the AVISO product. The forecast error is calculated as the RMS misfit in daily SSH averaged over the Pacific from 17°S to 17°N. For the period 2010–13, we have 22 realizations of 120-day forecasts; the results are averaged over all realizations and shown in Fig. 12.
Our forecast is produced by initializing the model with the final state of a 4-month assimilation and then simulating the next 120 days using climatological forcing at the surface and open boundaries. The error at the beginning of the forecast is low, because initial conditions were obtained from the state estimate, which has an average error of ~3.5 cm throughout the assimilation period, even though it did not use the later data. The forecast error grows in time to reach 5.5 cm by day 120. For reference, we also use the AVISO climatology as a forecast, which yields an approximately 6-cm RMS error. Note that for individual realizations, the errors range from 3 to 8.5 cm and the apparent slight decrease in this error over time is insignificant. By day 50, our forecast error exceeds the error associated with the climatology forecast. The “persistence” forecast, which is produced by assuming that SSH remains constant in time, has a greater error than climatology after 25 days. The fact that the error for our forecast grows more slowly than for the persistence forecast indicates that the model dynamics, driving the evolution of waves and eddies, are an important part of the forecast skill.
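The domain-averaged RMS misfit can be sketched as follows; the cosine-latitude area weighting and the (time, lat, lon) array layout are illustrative assumptions, as the text only specifies an average over 17°S–17°N.

```python
import numpy as np

def rms_error(forecast, truth, lat, lat_band=17.0):
    """Daily, domain-averaged RMS SSH misfit within +/- lat_band degrees.

    forecast, truth: arrays shaped (time, lat, lon); lat: 1D latitudes.
    Returns one RMS value per day, area weighted by cos(latitude).
    """
    mask = np.abs(lat) <= lat_band
    w = np.cos(np.deg2rad(lat[mask]))
    sq = (forecast[:, mask, :] - truth[:, mask, :]) ** 2
    return np.sqrt(np.average(sq.mean(axis=2), axis=1, weights=w))
```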
Not surprisingly, we can improve our forecast by replacing climatological forcing with estimates from the ERA-Interim atmospheric state and HYCOM lateral boundary conditions. This requires knowledge of future conditions and is therefore not a true forecast, but it isolates the role of the forcing in forecasting. In our case, it reduces the error growth from 3 cm per 120 days for the true forecast to 1.5 cm per 120 days for this “prior model simulation.”
Similarly, forecast skill indicates the benefit of assimilation in bringing the ocean state into consistency with observations. An unoptimized forward model run may be initialized from a global ocean assimilation product, such as HYCOM or ECCOv4. The latter is available through 2011 and was used to initialize some forward model runs using prior forcing from ERA-Interim. As mentioned above, the error is greater using ECCOv4 initial conditions than using those from the previous assimilation window of our regional state estimate (Fig. 12), allowing quantification of how much value the optimized initial state adds to the forecast. Because the optimized initial conditions provide a better first-guess solution, they also help the assimilation: the cost minimum can be found more rapidly, and nonlinearity is presumably reduced.
Assimilating altimetric observations and other datasets to produce the state estimate reduces the error relative to the prior solution. The predictions are compared to the state estimates, which are hindcasts optimized to match the along-track SSH observations to within about 3-cm RMS error. Since these observations are also used to create the AVISO product, the approximately 3-cm RMS difference between the hindcast and the mapped AVISO product is consistent with the assumed error. Note that, as is evident in Fig. 12, the state estimate error is largest at the beginning of the assimilation period and fluctuates for the first 5–10 days, reflecting an adjustment of the optimized initial conditions that causes a temporary “sloshing” of SSH across the domain. When the 4-month estimates are merged into a multiyear product, this artifact disappears.
In summary, the initial forecast error is set by the accuracy of the model state, and its growth is attributed to the true ocean state evolving in response to nonseasonal forcing and to the development of intrinsic variability. Comparing different initial conditions highlights the value of an accurate ocean state for forecast skill. Comparing different forcing scenarios (climatology, prior, and optimized) highlights the importance of interannual and synoptic variability. Comparing the forward model run to the persistence forecast highlights the importance of the model dynamics in addition to the forcing.
6. Conclusions and summary
This work quantifies how successfully the data-assimilating model can improve the estimated state of the tropical Pacific. We have focused our analysis on comparisons with SST, AVISO, mapped Argo products, and TAO. SST is readily fit because upper-ocean temperature equilibrates quickly to the atmospheric state, which is part of our control space. SSH is a more stringent constraint, because fitting the observations requires adjusting subsurface properties, which must also be consistent with in situ observations. The agreement of our solution with the AVISO product, within error bars, gives confidence in the state estimate. Note that we do not constrain the solution directly to the AVISO product; rather, we assimilate the same altimeter data used to make AVISO.
The RG09 product maps the irregularly spaced Argo profiles onto an evenly spaced grid by optimal interpolation. This widely used estimate of subsurface properties is available for the tropical Pacific over the period considered here, making it a natural benchmark for our state estimate. The mapped product is produced with a 10-day data window and a temporal resolution of 5 days; that is, a map is produced every 5 days using data from a 10-day period. It is not designed to resolve high-frequency variability and cannot be expected to capture the TAO variability on short time scales (Fig. 9; Table 2). This is where the state estimate brings in information that optimal interpolation cannot provide: statistical mapping by optimal interpolation cannot capture all high-frequency dynamics, because some time binning is always required in these methods. The G15 mapping of Argo data uses a methodology similar to that of RG09 but with a more accurate representation of the space–time covariance of the data, and it additionally incorporates altimetric observations into the mapping. By filling in information on shorter scales, the combined product should capture the intraseasonal variability (20–100 days) of the independent TAO dataset better than an Argo-only product. G15 show that their mapped product captures a large fraction of the variance of the TAO mooring at 0°N, 140°W over the period 2006–14: 95% and 77% of the variability at low and intermediate frequencies, respectively. Over the 4-yr period considered here, however, its performance against the equatorial TAO mooring differs little from that of the RG09 product (Table 2), and the state estimate performs better at intermediate and high frequencies. This supports our hypothesis that a dynamical ocean model adds value over a statistical mapping of the available observations.
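To make concrete why time binning filters short time scales, the following is a one-dimensional objective-mapping sketch in the spirit of, but much simpler than, the RG09 procedure. The Gaussian time covariance, the 10-day decorrelation scale, and the synthetic two-frequency signal are all illustrative assumptions, not the production configuration.

```python
import numpy as np

def objective_map(t_obs, y_obs, t_grid, l_time=10.0, noise_var=0.1):
    """Map irregularly sampled data onto a regular time grid using a
    Gaussian time covariance with decorrelation scale l_time (days).
    A minimal sketch of optimal interpolation, not the production
    mapping algorithm."""
    # Signal covariance among observations, plus observational noise.
    c_oo = np.exp(-((t_obs[:, None] - t_obs[None, :]) / l_time) ** 2)
    c_oo += noise_var * np.eye(t_obs.size)
    # Covariance between grid points and observations.
    c_go = np.exp(-((t_grid[:, None] - t_obs[None, :]) / l_time) ** 2)
    return c_go @ np.linalg.solve(c_oo, y_obs)   # OI estimate: C_go C_oo^{-1} y

def signal(t):
    """Synthetic signal: a 60-day wave plus a weaker 5-day wave."""
    return np.sin(2 * np.pi * t / 60) + 0.5 * np.sin(2 * np.pi * t / 5)

rng = np.random.default_rng(1)
t_obs = np.sort(rng.uniform(0.0, 120.0, 60))     # irregular "profile" times
y_obs = signal(t_obs) + 0.1 * rng.normal(size=t_obs.size)

t_grid = np.arange(0.0, 120.0, 5.0)              # regular 5-day output grid
y_map = objective_map(t_obs, y_obs, t_grid)
# y_map tracks the 60-day wave but strongly damps the 5-day wave.
```

With a 10-day decorrelation scale, the mapped field retains the 60-day wave but nearly eliminates the 5-day wave: the covariance model treats variability shorter than the mapping scale as noise and averages it away, which is the sense in which an optimally interpolated product cannot resolve high-frequency dynamics.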
Though we show skill in the state estimate, inconsistencies remain between the estimated state and the data. The consistency of the state estimate with the 100-m temperature from the TAO array, the main independent dataset used for validation, tends to be better at the equator, where the stratification is moderate. The state estimate is less consistent at 6°–8°N, where the thermocline is very sharp and its vertical excursions cause large fluctuations in the mooring data. The state estimate is also inconsistent with much of the withheld Spray glider data, and the assimilation of other in situ and remote constraints does not improve the fit as much as one would hope. Future work will assimilate these currently independent datasets to determine what is needed to bring the model into consistency with them. An aspect of the optimization procedure worth testing is the impact of optimizing the vertical diffusion coefficients; using these as controls should allow better representation of the thermocline. Some errors likely arise because the ⅓° spatial resolution is insufficient to resolve some of the important dynamics. Future work will test whether optimizing with lower viscosity brings the model into better consistency with the high-frequency dynamics. Lower viscosity is hypothesized to allow better reproduction of small-scale features, but it may require a shorter assimilation window.
In summary, we used a numerical model to synthesize observations of the tropical Pacific and to generate a product that can be used to study the dynamics of the region. The main advantages over optimal interpolation are 1) high temporal resolution, which makes it capable of resolving features such as tropical instability waves; and 2) closed dynamical and thermodynamic budgets, allowing for study of the governing physics. Comparison to the withheld TAO array showed consistency at time scales greater than 20 days but degraded skill at shorter time scales. This is a measure of the unique information content provided by the TAO array. At long time scales we were able to reproduce the state by synthesizing other available observations with model physics, but the TAO array is necessary to constrain variability on short time scales. It will be informative to constrain the state estimate to the TAO array in the future. Indeed, production of the state estimate continues and will include more recent years; it will be useful to investigate phenomena such as the 2014/15 El Niño event. The state estimate is available online (at http://www.ecco.ucsd.edu/tropac.html).
Acknowledgments.
AV and BC were supported by NOAA Grant NA13OAR4830216 through the Cooperative Institute for Marine Ecosystems and Climate (CIMEC). MM was supported by NSF Award PLR-1425989. We gratefully acknowledge use of data made available by the TAO Project Office of NOAA/PMEL (http://www.pmel.noaa.gov/tao), the U.S. NOAA/National Oceanographic Data Center (NODC) (http://www.nodc.noaa.gov/), the international Argo program and the national programs that contribute to it (http://www.argo.ucsd.edu; argo.jcommops.org), the USGODAE Argo Global Data Assembly Center (http://usgodae.org), the Scripps High Resolution XBT program (www-hrx.ucsd.edu), the Radar Altimeter Database System (http://rads.tudelft.nl/rads/rads.shtml), the CLIVAR and Carbon Hydrographic Data Office (http://cchdo.ucsd.edu/), and the SIO Instrument Development Group (http://spray.ucsd.edu). The Argo Program is part of the Global Ocean Observing System. A mapped Argo product is produced by D. Roemmich and J. Gilson (http://sio-argo.ucsd.edu/RG_Climatology.html). The AVISO (SSALTO/DUACS) altimeter products were produced and distributed by the Copernicus Marine Environment Monitoring Service (CMEMS; http://www.marine.copernicus.eu).