1. Introduction
The availability of large observational datasets and the increased importance of numerical models as scientific tools are leading to a growing use of data assimilation (DA). Applications of DA to high-dimensional models in the geosciences in general, and to biogeochemical (BGC) ocean models in particular, have produced significant improvements in model skill based on a variety of metrics (Edwards et al. 2015; Stammer et al. 2016). Yet, many opportunities for improvement remain for DA methods. In particular, some of the most widely used DA techniques require the specification of observation error and background error values, which have a strong influence on DA results but are difficult to determine objectively. In applications, these uncertainties are often based on heuristic approaches that can lead to inconsistencies (with respect to linear estimation theory) between the specified error values and the response of the DA system. Recently, the diagnostics introduced in Desroziers et al. (2005) set the foundation for a variety of approaches to estimate observation and background error values more objectively for variational and ensemble DA systems.
Observation and background error values are typically specified in the form of an observation error covariance matrix R and a background error covariance matrix B. The dimensions of R and B are set by the number of observations and the size of the model state vector, respectively, so that in high-dimensional applications both matrices can only be specified in strongly simplified form.
The diagnostics in Desroziers et al. (2005) (hereafter referred to as error covariance diagnostics) are based on linear estimation theory, which forms the theoretical basis of variational and many ensemble-based DA techniques [see, e.g., Talagrand (1999) for a summary]. They can be easily computed based on differences between the observation values and the background (prior) and analysis (posterior) model solutions at the observation locations and are, therefore, also often referred to as observation-minus-background, observation-minus-analysis, and analysis-minus-background statistics. Their computation requires an analysis solution and, thus, output from one or multiple DA experiments (a more detailed description of the error covariance diagnostics and their computation is presented in section 2c).
The error covariance diagnostics were originally introduced as consistency diagnostics, allowing for a relatively simple way to check the error specifications of DA systems. They have since found a wide variety of applications in various DA systems. Early uses of the diagnostics include the estimation of observation error values and inflation coefficients for an ensemble Kalman filter–based DA system (Li et al. 2009) and the estimation of observation errors including error correlations (off-diagonal elements of R) for satellite observations (e.g., Bormann et al. 2010; Stewart et al. 2014).
In practical applications, estimates of R and B based on the error covariance diagnostics are typically applied iteratively, because each adjustment of the error values changes the analysis from which the diagnostics are computed (Desroziers et al. 2005; Ménard 2016).
In this study, we apply error-covariance-diagnostics-based covariance adjustments to a four-dimensional variational (4D-Var) DA system consisting of a three-dimensional coupled physical–BGC ocean model with physical and satellite chlorophyll a data. This DA setup has previously been presented in Mattern et al. (2017), where manual modifications were made to R and B. Here, the adjustments are instead performed automatically, using a fixed-point iteration (FPI) based on the error covariance diagnostics.
2. Methods
a. Model and observations
The coupled physical–BGC model is based on the Regional Ocean Modeling System [ROMS; version 3.7, revision 737; Haidvogel et al. (2008)]. The model domain covers the California Current System (CCS; 30° to 48°N, coastline to 134°W) at a horizontal resolution of 0.1° × 0.1°; it is divided into 42 terrain-following layers vertically. Boundary conditions and physical forcing (wind, solar radiation, air temperature, pressure, and humidity) are based on output from COAMPS (Doyle et al. 2009). More details about the physical model are provided in Veneziani et al. (2009) and Raghukumar et al. (2015), which use a setup identical to our present application.
The BGC model is the North Pacific Ecosystem Model for Understanding Regional Oceanography (NEMURO; Kishi et al. 2007), which contains 11 BGC variables, including two phytoplankton variables that represent different size classes: large phytoplankton (LP) simulate diatoms, which are dominant in the coastal waters of the CCS, while small phytoplankton (SP) represent smaller species more prevalent offshore. LP and SP are assumed to have fixed but different nitrogen-to-chlorophyll a ratios, which affects the observation operator H for chlorophyll a and the setup of our FPI below. NEMURO parameter values are taken from and listed in Mattern and Edwards (2017), a parameter estimation study using the same model domain. The setup of the coupled physical–BGC model is identical to that in Mattern et al. (2017), where more information can be found.
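Although the exact formulation is given in Mattern et al. (2017) and appendix B, the role of the fixed ratios can be sketched as follows; the conversion factors θ_LP and θ_SP are our notation for the (distinct) chlorophyll-to-nitrogen ratios of the two size classes:

```latex
% Surface chlorophyll a as expressed by the observation operator (sketch):
% LP and SP are in nitrogen units; \theta_{LP} \neq \theta_{SP} are fixed
% chlorophyll-to-nitrogen conversion factors (notation ours).
\mathrm{Chl} \;=\; \theta_{LP}\,\mathrm{LP} \;+\; \theta_{SP}\,\mathrm{SP}
```

Because the two conversion factors differ, the mapping from the model state to observed chlorophyll a depends on the simulated community composition.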
In our experiments, we assimilate satellite-derived surface chlorophyll a data as the only BGC data jointly with physical data for temperature, salinity, and sea surface height (SSH) anomaly. The physical data include in situ, satellite-based, and reanalysis-based data. All data sources are listed in Table 1.
Table 1. The data used for assimilation.
b. 4D-Var-based assimilation system and log transformation
In our setup, we assume normal distributions for all physical variables, which is the standard 4D-Var approach. For the BGC variables and chlorophyll a, we assume lognormal distributions. The lognormal assumption for chlorophyll a better represents its characteristics in nature (Campbell 1995); for the BGC variables, the lognormal assumption has shown advantages in DA scenarios (Song et al. 2012). These assumptions imply that the distribution of the errors in the physical variables is normal, whereas the distribution of the errors in the log-transformed BGC variables is normal [for details, see Fletcher and Zupanski (2006)]. This change requires modifications to the standard cost function in Eq. (1), which are described in detail in Song et al. (2012, 2016a). The structure of J remains essentially the same as in the standard, purely Gaussian DA, but with the log-transformed BGC variables and observations taking the place of their untransformed counterparts, and with the corresponding entries of B and R specified for the transformed quantities.
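In sketch form [following Fletcher and Zupanski (2006)], the lognormal assumption means that the log transformation of a BGC quantity yields a Gaussian variable:

```latex
% A lognormally distributed quantity c (e.g., chlorophyll a) becomes
% Gaussian after the log transformation:
c \sim \mathrm{Lognormal}(\mu, \sigma^2)
\quad\Longleftrightarrow\quad
z = \ln c \;\sim\; \mathcal{N}(\mu, \sigma^2).
```

The cost function therefore retains its quadratic, Gaussian form when the BGC terms are written in terms of z, with the corresponding entries of B and R specified for the log-transformed variables.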
c. Error covariance diagnostics
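In their standard form, the diagnostics relate expected products of the misfit vectors to the error covariance matrices; here y denotes the observation vector, x^b and x^a the background and analysis states, and H the (linearized) observation operator:

```latex
% Misfit vectors:
%   d_b^o = y - H(x^b)        (observation minus background)
%   d_a^o = y - H(x^a)        (observation minus analysis)
%   d_b^a = H(x^a) - H(x^b)   (analysis minus background)
\begin{align}
  E\!\left[\,\mathbf{d}_b^o (\mathbf{d}_b^o)^\mathrm{T}\right]
    &= \mathbf{H}\mathbf{B}\mathbf{H}^\mathrm{T} + \mathbf{R},\\
  E\!\left[\,\mathbf{d}_a^o (\mathbf{d}_b^o)^\mathrm{T}\right]
    &= \mathbf{R},\\
  E\!\left[\,\mathbf{d}_b^a (\mathbf{d}_b^o)^\mathrm{T}\right]
    &= \mathbf{H}\mathbf{B}\mathbf{H}^\mathrm{T}.
\end{align}
```

When the specified R and B are consistent with the actual error statistics, these relations hold; departures from them are what the tuning procedure exploits.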
d. Fixed-point iteration
In our FPI procedure, we use different observation types (temperature, salinity, SSH, and log-chlorophyll a) for adjusting R and B, with a separate multiplier estimated for each type.
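As a minimal, self-contained sketch of such an iteration (a scalar toy system with a directly observed state, not the ROMS 4D-Var implementation; all variances and sample sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar DA system with a directly observed state (H = 1).
# True (unknown) observation and background error variances used to
# generate synthetic observation-minus-background innovations.
sigma_o2_true, sigma_b2_true = 0.5, 2.0
n = 200_000
d_ob = (rng.normal(0.0, np.sqrt(sigma_o2_true), n)
        - rng.normal(0.0, np.sqrt(sigma_b2_true), n))

# Assumed (mis-specified) error variances, adjusted by the iteration.
R, B = 4.0, 1.0
for _ in range(10):
    K = B / (B + R)          # Kalman gain for the scalar case
    d_oa = (1.0 - K) * d_ob  # observation-minus-analysis residuals
    d_ab = K * d_ob          # analysis-minus-background differences
    # Desroziers et al. (2005): E[d_oa d_ob] = R and E[d_ab d_ob] = B here.
    R = float(np.mean(d_oa * d_ob))
    B = float(np.mean(d_ab * d_ob))

# The sum R + B converges to var(d_ob) = 2.5, but the ratio R/B remains at
# its initial value of 4: with H = 1 and a single observation type, the
# diagnostics cannot separate the two error contributions.
print(R + B, R / B)
```

In this degenerate scalar case only the sum of the two variances is constrained, which illustrates the joint-estimation caveat discussed in section 4; in the full system, multiple observation types and the structure of H provide additional constraints.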
e. The data assimilation configurations
We test the FPI procedure for two DA configurations that differ only in their initial values for R and B. DA configuration 1 (DAC1) bases its background error values on the statistics of a long model simulation without DA.
DA configuration 2 (DAC2) has a much simpler setup for B: constant surface variance values for each variable that decline with depth, severely reducing the complexity of the prescribed values.
The initial values for the diagonal entries of B therefore differ between the two configurations.
Our two DA configurations also differ in their entries for R: DAC1 assigns different observation error values to in situ and other observations, whereas DAC2 makes no such distinction.
3. Results
a. Convergence
We first examine the FPI’s convergence characteristics using an FPI setup where each iteration of the FPI is based on 10 DA simulations, each consisting of two DA cycles. We thus refer to this DA setup as the 10 × 2 setup. The 10 simulations are evenly spaced across a 3-yr period of interest (2013–15; the start date of the first simulation is 5 January 2013, and that of the last is 6 September 2015) in order to capture and better represent interannual and intra-annual differences in data availability, model misfit, (monthly) background error values, and more. In our application, it is especially important to include intra-annual differences in the DA cycles, largely because of seasonal differences in the BGC model dynamics, while the length of each simulation (here, two cycles) is less crucial. Initial conditions for each of the 10 simulations are provided by a DA simulation without covariance adjustments; more than 3 million observations are assimilated in the 20 nonconsecutive DA cycles of the 10 × 2 setup (Fig. 1, center column).

Fig. 1. The number of temperature (T; orange), SSH (yellow), salinity (S; blue), and chlorophyll a (Chl; green) observations used in our experiments; darker colors mark in situ observations (see arrows). Our FPI is based on the (center) 10 × 2 setup. The prior and posterior model-observation misfits in section 3b were determined based on (left) a longer DA simulation spanning all of 2013, while the FPI sensitivity experiments in section 3c are based on (right) the 1 × 2 setup.
Citation: Monthly Weather Review 146, 2; 10.1175/MWR-D-17-0263.1
In our tests, the FPIs converge rapidly for both DAC1 and DAC2 (Fig. 2). In both DA configurations, relatively large changes to R and B occur in the first iterations, and the values stabilize after approximately five iterations.

Fig. 2. Convergence of (a)–(d) the background error and observation error values during the FPI for DAC1 and DAC2.
Despite differences in configurations that cannot be eliminated by the FPI, such as the spatial structure of background errors in DAC1 or differences in entries for unobserved variables, the adjusted values of R and B in the two configurations are similar after convergence of the FPI.
b. Consistency and model-observation misfit
In our assessment, we evaluate the effects of the covariance adjustments by performing a yearlong DA simulation for each of the two DA configurations, before and after the covariance adjustments. Each simulation starts on 1 January 2013 and spans all of 2013, during which more than 15 million observations (Fig. 1, left column) are assimilated in 92 cycles. To compare the DA configurations, we use several metrics that distinguish between observation types and DA cycles. That is, the metrics below are computed for each cycle and observation type individually (and are, thus, dependent on Oi,c, the set of observations associated with observation type i and DA cycle c).
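One standard consistency check of this kind (a well-known result of linear estimation theory, not necessarily the exact metric set used here) compares the cost function at the analysis to its expected value:

```latex
% For a consistent linear-Gaussian DA system [see, e.g., Talagrand (1999)]:
E\!\left[ J(\mathbf{x}^a) \right] = \frac{n_\mathrm{obs}}{2}
\qquad\Longrightarrow\qquad
\frac{J(\mathbf{x}^a)}{n_\mathrm{obs}} \approx \frac{1}{2}.
```

Values of J(x^a)/n_obs far from 1/2 indicate an inconsistency between the specified error covariances and the actual model-observation misfits.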
Fig. 3. Scatterplots for the metrics introduced in section 3b for (a),(c),(e) DAC1 and (b),(d),(f) DAC2; each panel shows the results of the DA before (x axis) and after (y axis) the FPI was used to adjust R and B.

Fig. 4. Scatterplots of J(xa)/nobs for each DA cycle of (a) DAC1 and (b) DAC2; both panels show the results of the DA before (x axis) and after (y axis) the FPI was used to adjust R and B.
c. FPI sensitivity
In our FPI experiments above, we used a setup where each iteration requires 10 assimilative simulations, each consisting of two cycles. Even though these simulations can be run in parallel, a considerable computational expense is associated with each iteration. To assess the sensitivity of the FPI to the DA setup and gauge the potential for drastically reducing the computational cost of the FPI, we created a second FPI setup where each iteration consists of 1, rather than 10, simulations, and in which just over 300 000 observations are assimilated (approximately 10% of the 10 × 2 setup; Fig. 1, right column). Because the single simulation still consists of two DA cycles, we refer to this setup as 1 × 2. In the following, we test the 1 × 2 setup in both DA configurations, DAC1 and DAC2, and compare the results to the 10 × 2 setup.
In terms of their convergence characteristics, the two FPI setups produce similar values, yet noticeable differences remain for some variables, especially for the salinity background errors and the log-chlorophyll observation errors (Fig. 5). For the metrics examined in section 3b, the 10 × 2 setup results in an improvement in consistency (lower eσ; not shown) over the 1 × 2 setup. With respect to the prior and posterior model error (not shown), the two setups exhibit systematic differences for the different observation types but no substantial improvement for either 10 × 2 or 1 × 2.

Fig. 5. Convergence of (a)–(d) the background error and observation error values for the 10 × 2 and 1 × 2 FPI setups.
While the 1 × 2 setup comes at a lower computational cost, it is based on a smaller number of observations. The results above indicate that the 1 × 2 setup provides less representative error covariance diagnostics and may be less suitable for general use in our DA application. More evidence for this conclusion is given by the background and observation error multipliers derived from the 92-cycle, yearlong simulations that were used to generate the results in section 3b. That is, we use the error covariance diagnostics of these yearlong simulations to derive reference multipliers for comparison with those produced by the two FPI setups.

Fig. 6. The effect of basing the FPI multipliers on subsets of observations that correspond to the first, second, third, and fourth days of each assimilation cycle (cycle day, see section 3d). The (a)–(d) background error and (e)–(h) observation error multipliers are shown for (left to right) each observation variable. The average multiplier for each observation type is marked by a dashed horizontal line in each panel. The multipliers are derived from the 92-cycle, yearlong simulations for DAC1 and DAC2 and using both the 10 × 2 setup (the basis for the results presented in section 3) and the computationally less expensive 1 × 2 setup, introduced in section 3c.
In light of these results, we created a new 10 × 2 FPI that uses the final values obtained from the 1 × 2 FPI in its first iteration. This new FPI setup converges toward the previous 10 × 2 values in just two iterations (for both DAC1 and DAC2; see extension experiments in Fig. 5). This result suggests that in practical applications, the first iterations of an FPI can be based on a small number of DA cycles, which come at a lower computational cost. After a few iterations in which partial convergence is achieved, the FPI switches to a more representative, yet more expensive, setup that uses a higher number of DA cycles and more observations for full convergence (see section 4 for further discussion).
d. Effect of model dynamics on covariance estimates
While our FPI procedure adjusts the values in R and B, the error covariance diagnostics on which it is based are affected by the model dynamics within each assimilation cycle. To examine this effect, we computed the FPI multipliers based on subsets of observations that correspond to the first, second, third, and fourth days of each assimilation cycle (the cycle day).
e. Attractiveness of the fixed-point solutions
In a final set of experiments, we examine the attractiveness of the fixed points in order to investigate whether there is a broader domain in which the iteration converges toward the same set of values. The complexity of the functions on which our FPIs are based [see Eqs. (5) and (6)], incorporating prior and posterior model solutions, prohibits an analytical examination. Instead, we start several FPIs in which the initial values of R and B are perturbed relative to our reference configurations and examine whether they converge toward the same fixed point.

Fig. 7. Convergence of the FPI when started from perturbed initial values for R and B (see section 3e).
4. Discussion and conclusions
We presented a simple way to objectively adjust the covariance matrices of variational DA systems based on the error covariance diagnostics, which can be computed easily from properties prescribed to the DA system and from the output of one or more DA cycles. In the form presented here, the modification of the covariance matrices is easy to implement and consists of a mere rescaling of the variances and associated off-diagonal elements of R and B.
We tested the procedure on two DA configurations that predominantly differ in their prescribed background error structure. DAC1, the first configuration, bases its background error values on the statistics of a long model simulation without DA, while the second configuration, DAC2, uses constant surface variance values with a depth decline, severely reducing the complexity of the prescribed values. The covariance adjustments led to improvements in both configurations, and, after the adjustments, they exhibit similar characteristics in terms of the statistics we examined. While the improvements for DAC1 can, to some extent, be attributed to the downweighting of in situ observations, which showed indications of overfitting prior to tuning, the FPI does more than eliminate somewhat obvious shortcomings in our configurations. In particular, it also improves DAC2, where no distinction is made between in situ and other observations (either in terms of different observation error values or in the tuning procedure itself). Furthermore, the simpler but tuned DAC2 clearly outperforms the untuned DAC1 with respect to the prior and posterior model-observation fit, and the two configurations provide very similar results once both are tuned. This outcome suggests that covariance estimates with a very simple structure, together with covariance tuning, may offer an alternative way to determine values for B.
FPIs based on error covariance diagnostics may converge toward incorrect error values if the error structure is not correctly modeled. One issue noted by several studies is the difficulty of estimating background and observation error values jointly (Desroziers et al. 2005; Ménard 2016; Bowler 2017). Under certain conditions, the contributions of the two error types cannot be separated, resulting in incorrect adjustments to R and B.
A limitation of the specific approach presented here is that it makes no adjustments to the background error values for unobserved variables. This is especially an issue in complex BGC ocean models, where typically no observations exist for most of the biological variables. In our application, we set the background error values for unobserved variables to relatively low values, a pragmatic approach that limits DA adjustments to variables that we cannot objectively assess with the error covariance diagnostics presented here. A possible alternative that we did not explore would be to use correlations in the state vector to establish correlations for background error values and perform a spatially dependent adjustment of the background error values for unobserved variables [localized, in order to minimize the effect of spurious correlations, similar to the treatment of inflation factors in Anderson (2009)]. Relatedly, a second limitation is that the approach presented here scales the full background error field for each variable, thereby assuming that its underlying spatial structure is correct. Given enough observations to provide reliable statistics, it is easy to allow for structural changes by dividing the observations into more types: for example, by distinguishing between coastal and offshore observations or observations at different times of year, resulting in spatial or temporal structure in the background and observation error values. It is further possible to adjust off-diagonal elements by estimating length scales (see, e.g., Ménard 2016). A third limitation of our approach is that the influence of the tangent linear model dynamics is ignored in the computation of the error covariance diagnostics (see section 3d).
The covariance tuning is based on an FPI, which converges quickly in about five iterations. In our experiments, we observed that the FPI converges to similar, but not identical, values if it is based on 1 instead of 10 simulations with two cycles each (see section 3c). Our results show that the 20 (nonconsecutive) cycle statistics, which are based on approximately 10 times the number of observations, are more representative and, thus, preferable to the two-cycle statistics. This suggests an FPI setup for practical applications that bases its first iterations on computationally cheap DA simulations, followed by iterations using more expensive simulations. The first iterations are used to achieve an initial convergence toward rough estimates of the error covariance values; to achieve the final convergence, the following iterations use more representative DA simulations consisting of more cycles and incorporating more observations.
Acknowledgments
We thank two anonymous reviewers for their constructive comments. This research was, in part, supported by Grant OCE-1566623 from the National Science Foundation Division of Ocean Sciences. Any opinions, findings, and conclusions or recommendations expressed here are those of the authors and do not necessarily reflect the views of the National Science Foundation. We also gratefully acknowledge the support through Grant NA16NOS0120021 of the National Oceanic and Atmospheric Administration and the Central and Northern California Ocean Observing System.
APPENDIX A
Diagnostics for Log-Transformed Variables
While the diagnostics for the fixed-point iteration are derived in Desroziers et al. (2005) assuming Gaussian error statistics, it is not obvious that the theory applies more generally to variables with non-Gaussian distributions and error statistics. Here, we reexamine the assumptions of the quadratic, incremental form of lognormal 4D-Var (Song et al. 2016a) used in the present data assimilation system and show that within the constraints of the linearized theory, the diagnostics remain appropriate.
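In sketch form, the key point is that the misfits entering the diagnostics are computed for the log-transformed quantities; writing ln y for the log-transformed observations and h̃ for the observation operator composed with the log transformation (notation ours):

```latex
% Log-space misfit vectors (tilde quantities are log-transformed):
\tilde{\mathbf{d}}_b^o = \ln \mathbf{y} - \tilde{h}(\mathbf{x}^b), \qquad
\tilde{\mathbf{d}}_a^o = \ln \mathbf{y} - \tilde{h}(\mathbf{x}^a), \qquad
\tilde{\mathbf{d}}_b^a = \tilde{h}(\mathbf{x}^a) - \tilde{h}(\mathbf{x}^b).
```

Provided the errors of the log-transformed quantities are Gaussian with covariances specified in log space, the linearized theory, and hence the diagnostics of section 2c, carry over unchanged.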
APPENDIX B
Treatment of Log-Chlorophyll
A major effect of assuming an identical LP:SP ratio is that the values of the observation operator H for log-chlorophyll a simplify.
REFERENCES
Anderson, J. L., 2009: Spatially and temporally varying adaptive covariance inflation for ensemble filters. Tellus, 61A, 72–83, https://doi.org/10.1111/j.1600-0870.2008.00361.x.
Bölöni, G., and K. Horvath, 2010: Diagnosis and tuning of background error statistics in a variational data assimilation system. Quart. J. Hung. Meteor. Serv., 114, 1–19.
Bormann, N., A. Collard, and P. Bauer, 2010: Estimates of spatial and interchannel observation-error characteristics for current sounder radiances for numerical weather prediction. II: Application to AIRS and IASI data. Quart. J. Roy. Meteor. Soc., 136, 1051–1063, https://doi.org/10.1002/qj.615.
Bormann, N., M. Bonavita, R. Dragani, R. Eresmaa, M. Matricardi, and A. McNally, 2016: Enhancing the impact of IASI observations through an updated observation-error covariance matrix. Quart. J. Roy. Meteor. Soc., 142, 1767–1780, https://doi.org/10.1002/qj.2774.
Bowler, N. E., 2017: On the diagnosis of model error statistics using weak-constraint data assimilation. Quart. J. Roy. Meteor. Soc., 143, 1916–1928, https://doi.org/10.1002/qj.3051.
Campbell, J. W., 1995: The lognormal distribution as a model for bio-optical variability in the sea. J. Geophys. Res., 100, 13 237–13 254, https://doi.org/10.1029/95JC00458.
Campbell, W. F., E. A. Satterfield, B. Ruston, and N. L. Baker, 2017: Accounting for correlated observation error in a dual-formulation 4D variational data assimilation system. Mon. Wea. Rev., 145, 1019–1032, https://doi.org/10.1175/MWR-D-16-0240.1.
Cordoba, M., S. L. Dance, G. A. Kelly, N. K. Nichols, and J. A. Waller, 2017: Diagnosing atmospheric motion vector observation errors for an operational high-resolution data assimilation system. Quart. J. Roy. Meteor. Soc., 143, 333–341, https://doi.org/10.1002/qj.2925.
Courtier, P., J.-N. Thépaut, and A. Hollingsworth, 1994: A strategy for operational implementation of 4D-Var, using an incremental approach. Quart. J. Roy. Meteor. Soc., 120, 1367–1387, https://doi.org/10.1002/qj.49712051912.
Daescu, D. N., and R. Todling, 2010: Adjoint sensitivity of the model forecast to data assimilation system error covariance parameters. Quart. J. Roy. Meteor. Soc., 136, 2000–2012, https://doi.org/10.1002/qj.693.
Dee, D. P., and Coauthors, 2011: The ERA-Interim reanalysis: Configuration and performance of the data assimilation system. Quart. J. Roy. Meteor. Soc., 137, 553–597, https://doi.org/10.1002/qj.828.
Desroziers, G., and S. Ivanov, 2001: Diagnosis and adaptive tuning of observation-error parameters in a variational assimilation. Quart. J. Roy. Meteor. Soc., 127, 1433–1452, https://doi.org/10.1002/qj.49712757417.
Desroziers, G., L. Berre, B. Chapnik, and P. Poli, 2005: Diagnosis of observation, background and analysis-error statistics in observation space. Quart. J. Roy. Meteor. Soc., 131, 3385–3396, https://doi.org/10.1256/qj.05.108.
Doyle, J. D., Q. Jiang, Y. Chao, and J. Farrara, 2009: High-resolution real-time modeling of the marine atmospheric boundary layer in support of the AOSN-II field campaign. Deep-Sea Res. II, 56, 87–99, https://doi.org/10.1016/j.dsr2.2008.08.009.
Edwards, C. A., A. M. Moore, I. Hoteit, and B. D. Cornuelle, 2015: Regional ocean data assimilation. Annu. Rev. Mar. Sci., 7, 21–42, https://doi.org/10.1146/annurev-marine-010814-015821.
Fletcher, S. J., and M. Zupanski, 2006: A data assimilation method for log-normally distributed observational errors. Quart. J. Roy. Meteor. Soc., 132, 2505–2519, https://doi.org/10.1256/qj.05.222.
Haidvogel, D. B., and Coauthors, 2008: Ocean forecasting in terrain-following coordinates: Formulation and skill assessment of the Regional Ocean Modeling System. J. Comput. Phys., 227, 3595–3624, https://doi.org/10.1016/j.jcp.2007.06.016.
Howes, K. E., A. M. Fowler, and A. S. Lawless, 2017: Accounting for model error in strong-constraint 4D-Var data assimilation. Quart. J. Roy. Meteor. Soc., 143, 1227–1240, https://doi.org/10.1002/qj.2996.
Karspeck, A. R., 2016: An ensemble approach for the estimation of observational error illustrated for a nominal 1° global ocean model. Mon. Wea. Rev., 144, 1713–1728, https://doi.org/10.1175/MWR-D-14-00336.1.
Kishi, M. J., and Coauthors, 2007: NEMURO—A lower trophic level model for the North Pacific marine ecosystem. Ecol. Modell., 202, 12–25, https://doi.org/10.1016/j.ecolmodel.2006.08.021.
Li, H., E. Kalnay, and T. Miyoshi, 2009: Simultaneous estimation of covariance inflation and observation errors within an ensemble Kalman filter. Quart. J. Roy. Meteor. Soc., 135, 523–533, https://doi.org/10.1002/qj.371.
Mattern, J. P., and C. A. Edwards, 2017: Simple parameter estimation for complex models—Testing evolutionary techniques on 3-dimensional biogeochemical ocean models. J. Mar. Syst., 165, 139–152, https://doi.org/10.1016/j.jmarsys.2016.10.012.
Mattern, J. P., H. Song, C. A. Edwards, A. M. Moore, and J. Fiechter, 2017: Data assimilation of physical and chlorophyll a observations in the California Current System using two biogeochemical models. Ocean Modell., 109, 55–71, https://doi.org/10.1016/j.ocemod.2016.12.002.
Ménard, R., 2016: Error covariance estimation methods based on analysis residuals: Theoretical foundation and convergence properties derived from simplified observation networks. Quart. J. Roy. Meteor. Soc., 142, 257–273, https://doi.org/10.1002/qj.2650.
Neveu, E., A. M. Moore, C. A. Edwards, J. Fiechter, P. Drake, W. J. Crawford, M. G. Jacox, and E. Nuss, 2016: An historical analysis of the California Current circulation using ROMS 4D-Var: System configuration and diagnostics. Ocean Modell., 99, 133–151, https://doi.org/10.1016/j.ocemod.2015.11.012.
Raghukumar, K., C. A. Edwards, N. L. Goebel, G. Broquet, M. Veneziani, A. M. Moore, and J. P. Zehr, 2015: Impact of assimilating physical oceanographic data on modeled ecosystem dynamics in the California Current System. Prog. Oceanogr., 138, 546–558, https://doi.org/10.1016/j.pocean.2015.01.004.
Song, H., C. A. Edwards, A. M. Moore, and J. Fiechter, 2012: Incremental four-dimensional variational data assimilation of positive-definite oceanic variables using a logarithm transformation. Ocean Modell., 54–55, 1–17, https://doi.org/10.1016/j.ocemod.2012.06.001.
Song, H., C. A. Edwards, A. M. Moore, and J. Fiechter, 2016a: Data assimilation in a coupled physical–biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 1—Model formulation and biological data assimilation twin experiments. Ocean Modell., 106, 131–145, https://doi.org/10.1016/j.ocemod.2016.04.001.
Song, H., C. A. Edwards, A. M. Moore, and J. Fiechter, 2016b: Data assimilation in a coupled physical–biogeochemical model of the California Current System using an incremental lognormal 4-dimensional variational approach: Part 3—Assimilation in a realistic context using satellite and in situ observations. Ocean Modell., 106, 159–172, https://doi.org/10.1016/j.ocemod.2016.06.005.
Stammer, D., M. Balmaseda, P. Heimbach, A. Köhl, and A. Weaver, 2016: Ocean data assimilation in support of climate applications: Status and perspectives. Annu. Rev. Mar. Sci., 8, 491–518, https://doi.org/10.1146/annurev-marine-122414-034113.
Stewart, L. M., S. L. Dance, N. K. Nichols, J. R. Eyre, and J. Cameron, 2014: Estimating interchannel observation-error correlations for IASI radiance data in the Met Office system. Quart. J. Roy. Meteor. Soc., 140, 1236–1244, https://doi.org/10.1002/qj.2211.
Talagrand, O., 1999: A posteriori evaluation and verification of analysis and assimilation algorithms. Workshop on Diagnosis of Data Assimilation Systems, Reading, United Kingdom, ECMWF, 17–28, https://www.ecmwf.int/sites/default/files/elibrary/1999/12547-posteriori-evaluation-and-verification-analysis-and-assimilation-algorithms.pdf.
Veneziani, M., C. A. Edwards, J. D. Doyle, and D. Foley, 2009: A central California coastal ocean modeling study: 1. Forward model and the influence of realistic versus climatological forcing. J. Geophys. Res., 114, C04015, https://doi.org/10.1029/2008JC004774.
Waller, J. A., S. Ballard, S. Dance, G. Kelly, N. K. Nichols, and D. Simonin, 2016a: Diagnosing horizontal and inter-channel observation error correlations for SEVIRI observations using observation-minus-background and observation-minus-analysis statistics. Remote Sens., 8, 581, https://doi.org/10.3390/rs8070581.
Waller, J. A., S. L. Dance, and N. K. Nichols, 2016b: Theoretical insight into diagnosing observation error correlations using observation-minus-background and observation-minus-analysis statistics. Quart. J. Roy. Meteor. Soc., 142, 418–431, https://doi.org/10.1002/qj.2661.
Waller, J. A., D. Simonin, S. L. Dance, N. K. Nichols, and S. P. Ballard, 2016c: Diagnosing observation error correlations for Doppler radar radial winds in the Met Office UKV model using observation-minus-background and observation-minus-analysis statistics. Mon. Wea. Rev., 144, 3533–3551, https://doi.org/10.1175/MWR-D-15-0340.1.
Yang, C., S. Masina, and A. Storto, 2017: Historical ocean reanalyses (1900–2010) using different data assimilation strategies. Quart. J. Roy. Meteor. Soc., 143, 479–493, https://doi.org/10.1002/qj.2936.