## 1. Introduction

Accurate river bathymetry characterization is crucial for river navigation and planning, allowing for safe passage of vessels and guiding channel maintenance operations. Knowledge of river bathymetry also enables hydrodynamic modeling of river flow, which is important for understanding many aspects of the riparian environment (e.g., for hydrology, ecology, and resource management). Recent advances in remote sensing techniques have spurred the development of a number of potential approaches for determining bathymetry from observational data. If the water is clear enough, it is possible to measure river depths directly using multispectral optical remote sensing (Hilldale and Raff 2008; Legleiter 2012; Legleiter et al. 2016). However, these methods lose effectiveness under high-turbidity conditions, which limits their utility in high-energy environments. Other proposed techniques include the use of remotely sensed water surface elevation (WSE) data to solve for bathymetry. Durand et al. (2008) used an ensemble Kalman filter approach to estimate the bathymetry depths at five points along the Amazon River, using synthetically generated WSE measurements from the Surface Water and Ocean Topography (SWOT) mission; they were able to improve the bathymetry guess at the river’s outlet by 84%. Similarly, Yoon et al. (2012) used a local ensemble batch smoother to estimate bathymetry along a reach of the Ohio River to within a 52-cm average accuracy, via assimilation of eight simulated SWOT revisit cycles. More recently, Hostache et al. (2015) used synthetic drifting buoy measurements of water surface elevation and slope to estimate 1D river bathymetry with an accuracy of 36 cm, using a particle filter methodology.

Following the development of technology to remotely measure surface velocity fields, recent research has focused on characterizing rivers using surface velocity measurements. Wilson et al. (2010) used an ensemble approach to assimilate synthetic depth-averaged velocities and to correct bathymetry for a nearshore environment, using empirical orthogonal functions to construct a series of bathymetries in the study region; they later extended their methodology to riverine environments (Wilson and Özkan-Haller 2012). Landon et al. (2014) used an ensemble Kalman filter approach to estimate river depths with Lagrangian drifter–derived surface velocities using data from up to 10 drifter deployments. The ensemble Kalman filter approach can require over 1000 forward model runs (5–10 realizations with ensemble sizes of 200–500), whereas the present work required 30 adjoint model runs and 250 forward model runs. Moghimi et al. (2016) used an ensemble-based assimilation approach to retrieve bathymetry in a tidal inlet using synthetic surface velocities and wave spectra such as those that could be obtained via synthetic aperture radar (SAR) measurements.

Variational inverse modeling, which involves the use of adjoint equations to estimate unknown variables given a set of observations, is an alternative technique for approaching estimation problems that can be more efficient than ensemble-type methods. Variational inverse modeling has been successfully applied to a range of fields. For example, variational techniques have been used to estimate initial conditions for atmospheric predictions (Wang et al. 1992), to estimate large-scale ocean currents (Li et al. 1993), and to locate buried scatterers using the Maxwell equations (Rekanos and Raisanen 2003). Sanders and Bradford (2002) used variational methods to estimate river discharge from downstream depth measurements. More closely aligned with this work, Kurapov and Özkan-Haller (2013) used a variational assimilation approach to estimate nearshore bathymetry with a one-way coupled wave–circulation model. Zaron et al. (2011) developed a variational approach for estimating bathymetry from surface velocity observations in a tidally driven estuary. They used a weak-constraint variational formulation that allowed the flow estimate to deviate from exact satisfaction of the hydrodynamic model. The linearization of the flow equations, chosen for stability purposes, eliminated some terms that would naturally appear in the adjoint equations. In their results, the bathymetry estimates were relatively low resolution (~100 m) and had a root-mean-square error (RMSE) of about 3 m; the estimates exhibited significant sensitivity to both the initial guess for the bathymetry and the error characteristics of the data.

Here, we present a variational approach for combining surface velocity data with a transient two-dimensional hydrodynamic model in order to estimate bathymetry, given known discharge (upstream boundary condition) and bottom friction. The data used were collected by Areté Associates using temporal sequences of airborne infrared imagery (Dugan et al. 2014). The algorithm is similar in overall mathematical approach to that of Zaron et al. (2011), but it differs significantly in the details of its implementation. In contrast to Zaron et al., we use a strong-constraint approach, where the computed flow exactly satisfies the governing equations, and the linearization of the equations used in developing the adjoint equations is done in a mathematically straightforward fashion. We have developed a numerical solver for adjoint equations derived analytically from the nonlinear model equations, as opposed to starting with the discrete forward model that has been linearized (by lagging the nonlinear terms and transposing the discrete forward operator to get the adjoint). Certain numerical stability issues are dealt with via a specialized solver described below in section 2. Our method provides good results using minimal a priori knowledge of the river bathymetry, and we have been able to achieve estimation errors on the order of 2 m.

In the following sections, we first describe the variational framework and derive the estimation algorithm. We then describe the study area, a 95-km reach of the Columbia River near Hanford, Washington, and the observation data. While the length of the reach is challenging enough, the situation is further complicated by the time-varying river discharge, which is controlled by the Priest Rapids Dam. Application of the algorithm to the data is then described, followed by an assessment of the results and conclusions.

## 2. Variational estimation framework

The bathymetry *h* is defined as positive down from the same datum as the water surface elevation *η*, and the total water depth is given as

*H = h + η*.

Bottom friction is specified via the Manning coefficient *n*, which is related to the Chézy coefficient *C* through

*C = H*^{1/6}/*n*.

This choice of bottom friction specification is important because the Manning formulation includes the hydrodynamic depth explicitly, allowing this term to contribute directly to the bathymetry gradient. An empirical correlation allows us to relate the surface velocity, which is the observed quantity, to the depth-averaged velocity computed by the model.

The estimation problem is to find the bathymetry *h*(*x*_{i}) that minimizes the cost function *J′*, a measure of the misfit between the predicted and observed surface velocities, subject to the constraint that the flow is a solution of the shallow-water equations. This constrained minimization problem can be reduced to a more tractable unconstrained minimization problem by augmenting the cost function with the product of the constraint (the shallow-water equations) and a set of Lagrangian multipliers (adjoint variables). This yields an augmented functional *J*, which is a functional of the flow variables, the adjoint variables, and *h*; *J* is minimized when the first variation with respect to each variable vanishes, independently. The forward model equations are recovered when the first variations of the cost function with respect to the adjoint variables are set to zero, and setting the first variations with respect to the flow variables to zero yields the adjoint equations, which are integrated backward in time (*t′ = −t*). Similarly, taking the first variation of *J* with respect to *h* and integrating by parts yields an expression for the gradient of the cost function with respect to the bathymetry.

To solve for the bathymetry that minimizes the cost function, we use an iterative gradient descent algorithm. First, an initial-guess bathymetry is generated using knowledge of the land–water boundary of the river; such information can be easily obtained via aerial imagery and topographical data. We assume a fixed-depth channel of rectangular cross section for the river. Then an initial forward simulation is run by solving the shallow-water equations using Delft3D-FLOW; the surface velocity is computed from the depth-averaged velocity via the empirical correlation. Using the resultant flow field, the adjoint equations are then solved backward in time, using the error between the predicted and observed velocity as input. At this point, the forward and adjoint solutions are used to calculate the gradient of the cost function with respect to the bathymetry. The unscaled gradient is used in a line minimization via golden section search to determine the scale factor for the bathymetry update; the process is then repeated until the cost function converges.
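The descent loop just described can be sketched in a few lines. This is a minimal illustration only: the forward and adjoint solves are stood in for by a toy quadratic cost and its exact gradient, and the names (`cost`, `grad`) and the search bracket [0, 2] are assumptions for the sketch, not values from the study.

```python
import numpy as np

# Sketch of the descent loop: steepest descent on the bathymetry with a
# golden-section line search choosing the gradient scale factor.

PHI = (np.sqrt(5.0) - 1.0) / 2.0  # inverse golden ratio, ~0.618

def golden_section(f, a, b, tol=1e-6):
    """Return x in [a, b] minimizing the unimodal function f."""
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return 0.5 * (a + b)

h_true = np.array([4.0, 7.0, 3.0])               # toy "true" bathymetry
cost = lambda h: 0.5 * np.sum((h - h_true) ** 2)  # stands in for a forward run + misfit
grad = lambda h: h - h_true                       # stands in for the adjoint gradient

h = np.full(3, 8.0)  # flat-bottom initial guess
for _ in range(5):
    g = grad(h)
    # Each cost evaluation inside the search corresponds to one forward
    # model run; the real algorithm uses a loose tolerance (5-10 runs).
    alpha = golden_section(lambda a: cost(h - a * g), 0.0, 2.0)
    h = h - alpha * g

assert cost(h) < 1e-6
```

In the actual algorithm, the cost involves a full Delft3D-FLOW simulation and the surface velocity misfit, and the gradient is assembled from the forward and adjoint fields, so keeping the line search cheap (few evaluations) matters.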

The objective of this research is to establish the efficacy of the methodology and to exercise the algorithm on a challenging problem of interest; further algorithmic enhancements (e.g., alternatives to steepest descent and to the golden section search) should improve the overall efficiency.

## 3. Study area and data description

The Hanford Reach of the Columbia River is a large, complex river system that includes single- and multithreaded subreaches. The study area, encompassing a roughly 95-km reach between the Priest Rapids Dam and Richland, Washington, is shown in Fig. 1.

Observation data were collected using an infrared imaging system with particle image velocimetry (PIV) processing of time series data (S. Anderson, Areté Associates, 2014, personal communication). The data were collected between approximately 0300 and 0600 Pacific daylight time (PDT) 8 October 2011. The data exhibit full bank-to-bank coverage for most of the river with a nominal resolution of 20 m. The RMSE in the velocity data is on the order of 10 cm s^{−1}, with no bias, as reported by Areté Associates. The collection included two sweeps of the river: first, from the southeast to the northwest, and then from the northwest to the southeast. Each sweep required about 1.5 h of flight time. Observational data from the two passes for a lower section of the river are shown in Fig. 2, displaying the typical data characteristics (e.g., resolution and noise). It is notable that the data resolution was different for the two passes.

The forward model was set up using a curvilinear, boundary-fitted (and nominally orthogonal) computational grid with 4827 × 37 cells, resulting in an average resolution of approximately 20 m in both directions. It was constructed using two splines along the river banks extracted from aerial imagery of the river. Included in Fig. 2 are portions of the computational grid and the surveyed bathymetry. The grid displayed in Fig. 2 has been decimated by a factor of 3 for visual clarity. The forward model was set up to simulate a 30.5-h period starting at 0000 PDT 7 October 2011 and concluding shortly after the completion of the Infrared Imaging System (IRIS) data collection early on the morning of 8 October 2011. This allows for a model spinup of approximately 24 h in order to allow any initial transients to pass through the entire reach. The upstream boundary condition is a variable flow rate (between 1000 and 3000 m^{3} s^{−1}); data are provided by U.S. Geological Survey (USGS) gauge 12472800, located just below the Priest Rapids Dam. Data for the downstream water surface elevation boundary condition are provided by USGS gauge 12514500, located just over 12 km downstream of the domain outflow boundary near the entrance of Lake Wallula, upstream of McNary Dam. Over the simulation period, the water surface elevation near the downstream computational boundary varied only on the order of 5 cm, so we used a fixed water level boundary condition set to 104.3 m (North American Vertical Datum of 1988, NAVD88) at the downstream boundary. The forward model is run with a time step of 6 s and a Manning friction coefficient of 0.03 s m^{−1/3}. As this effort is focused on the estimation of bathymetry, it is assumed that the bottom friction coefficient is known (and uniform). The bottom friction coefficient was chosen based on a model comparison between estimated and observed surface velocities when using the true bathymetry. The influence of spatially varying bottom friction is left to future work.
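The depth dependence introduced by the Manning formulation (section 2) can be illustrated with a short sketch. The depths below are illustrative values; only the coefficient *n* = 0.03 is taken from this study.

```python
# Sketch of the Manning-Chezy relation used in the bottom friction term.
# Depth values are illustrative; n = 0.03 is the coefficient used here.

def chezy_from_manning(n: float, depth: float) -> float:
    """Chezy coefficient C = H**(1/6) / n for total water depth H (m)."""
    return depth ** (1.0 / 6.0) / n

n = 0.03  # Manning coefficient, units of s/m**(1/3)

# C shrinks as the water shallows, so the quadratic bottom stress
# (~ g |u| u / C**2) grows; this explicit depth dependence is what lets
# the friction term contribute directly to the bathymetry gradient.
c_shallow = chezy_from_manning(n, 2.0)
c_deep = chezy_from_manning(n, 10.0)
assert c_shallow < c_deep
```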

The Hanford Reach of the Columbia River is extremely dynamic. A plot of the discharge at the upstream dam versus time during the data collection window and the discharge versus distance over the reach at a fixed time are shown in Fig. 3. The inflow discharge shows significant variation over time; this is attenuated somewhat over the length of the river. During the data collection window (27–29 h), the discharge also shows significant variation over the length of the river, as seen in Fig. 3. The adjoint solver (which runs backward in time) can be run for only a portion of the forward simulation, since the data sources are confined to a smaller time window; we chose a 9.5-h period (30.5–21 h) based on the river velocities and the reach length.

The ground truth bathymetry was collected in 1998 by the USGS Biological Resources Division’s Columbia River Research Laboratory (Tiffan et al. 2002; provided to us by Dr. K. T. Holland of the Naval Research Laboratory), using the Scanning Hydrographic Operational Airborne Lidar Survey (SHOALS) lidar system (Irish et al. 2000). A sample of the bathymetry is shown in Fig. 2. Computing the flow using the true bathymetry gave satisfactory results when comparing the modeled surface velocity to the IRIS data, as shown in Fig. 4. Because there are thousands of data points, the data are binned to clarify the comparison. The color scale indicates the number of data points in the data bins, and the error bars indicate plus/minus one standard deviation of the computed velocities in the bin, plotted against the observed velocity.
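The binning procedure behind this comparison can be sketched as follows. The data here are synthetic stand-ins for the IRIS observations, and the bin width and velocity range are illustrative assumptions; only the 10 cm s^{−1} noise level matches the value quoted above.

```python
import numpy as np

# Sketch of the binned observed-vs-modeled velocity comparison: with
# thousands of points, a raw scatter is unreadable, so points are binned
# by observed velocity; each bin reports a count and the mean and
# +/- one standard deviation of the modeled velocities.

rng = np.random.default_rng(0)
v_obs = rng.uniform(0.0, 2.5, 5000)         # observed speeds, m/s (synthetic)
v_mod = v_obs + rng.normal(0.0, 0.1, 5000)  # "model" + 10 cm/s noise

edges = np.linspace(0.0, 2.5, 26)           # 25 bins of 0.1 m/s
idx = np.digitize(v_obs, edges) - 1

counts = np.bincount(idx, minlength=25)     # color scale in the figure
mean = np.array([v_mod[idx == k].mean() for k in range(25)])
std = np.array([v_mod[idx == k].std() for k in range(25)])  # error bars
```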

The initial bathymetry field for the assimilation was determined by using a river mask defined by an aerial image along with regional topographical information. In the wetted regions not covered by the data, the depth is set to 2 m below the local bank topography; in the data region, the depth is set farther below the local bank topography, yielding bathymetry that is essentially a flat bottom channel sloping downstream, tracking the local river banks. The additional channel offset depth is chosen to be generally too deep, such that the algorithm should generally focus on making certain regions shallower than the initial guess. This is intentional because for intermediate depths, if the velocity is too high, continuity dictates that the depth should be deeper, while bottom friction tends to adjust the depth shallower. This behavior has been observed in previous testing, as noted by Almeida (2012). For this particular study, two different initial channel depths were used in testing: one with 8-m depth in the data region and one with 5-m depth in the data region. The final estimated bathymetry fields of the two cases were very similar in most regions of the river, but there were a few short subreaches where the shallower initial guess resulted in a final estimate that was too shallow. Below, we focus only on the results from the deeper initial-guess assimilation.
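The construction of the initial guess can be sketched as follows. The bank elevations, mask extent, and grid size below are synthetic placeholders; only the 2-m and 8-m offsets are the values quoted above.

```python
import numpy as np

# Sketch of the initial-guess bathymetry: a flat-bottom channel at a
# fixed offset below the local bank elevation, with a deeper offset
# inside the data-covered region.

n_stream, n_cross = 100, 10
# Bank elevation sloping downstream (m above datum), synthetic.
bank_elev = np.linspace(120.0, 105.0, n_stream)[:, None] * np.ones((1, n_cross))

in_data_region = np.zeros((n_stream, n_cross), dtype=bool)
in_data_region[20:80, :] = True  # hypothetical data coverage

# 8 m below the banks where data exist, 2 m elsewhere (intentionally deep,
# so the algorithm mostly makes regions shallower).
depth_offset = np.where(in_data_region, 8.0, 2.0)
bed_elev = bank_elev - depth_offset  # flat-bottom channel tracking the banks
```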

## 4. Results and discussion

Using the algorithm described above, the bathymetry is adjusted, starting from an initial guess until a best fit to the observations is obtained. Overall, the algorithm required 30 line minimizations to reach cost function convergence. Each line minimization required 5–10 forward model runs in the golden section search to find the correct scale factor for the bathymetry gradient. The forward model was run on 32 processors, requiring approximately 10 min for each run, and the adjoint model runs required approximately 30 min on 32 processors. The initial cost function was 2660 m^{2} s^{−2}, which was reduced by 77% to 601 m^{2} s^{−2} at convergence. However, the convergence criterion was strict: after 10 line minimizations, the cost function was already reduced by 70%.

The velocities computed using the initial-guess bathymetry and for the final converged bathymetry are shown in Fig. 5, using the same binning procedure described above. The improvement in the agreement between the observed and estimated velocities is clear, although the error is slightly larger than that obtained using the ground truth bathymetry. The remaining velocity errors (predominantly in the low-velocity regions, generally where the river is deepest) indicate that there is still room for improvement.

To fully characterize the overall algorithm performance, we consider total water depth and bathymetry as metrics for assessing algorithm accuracy. The water depths computed using the ground truth bathymetry are used for this comparison, and the overall RMSE in depth, over the entire 95-km reach, is 1.96 m. Comparing the estimated bathymetry and water depths to the ground truth can be accomplished in multiple ways. Figure 6 shows a comparison of the cross-stream-averaged bathymetry as a function of downstream distance. It is clear that both large- and small-scale cross-stream-averaged features are captured by the estimation algorithm; however, for the first 30 km downstream, there is a vertical offset of about 8 m.
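The two accuracy metrics used here are straightforward to state precisely; the sketch below computes them on synthetic stand-in fields. The array contents are random placeholders, and only the 4827 × 37 grid shape is taken from the model setup described above.

```python
import numpy as np

# Sketch of the accuracy metrics: overall depth RMSE over the reach, and
# the cross-stream-averaged profile versus downstream distance (Fig. 6).

rng = np.random.default_rng(1)
true_depth = 5.0 + rng.normal(0.0, 1.0, (4827, 37))    # along x across stream
est_depth = true_depth + rng.normal(0.0, 2.0, (4827, 37))

# Scalar RMSE over the entire reach.
rmse = np.sqrt(np.mean((est_depth - true_depth) ** 2))

# Cross-stream averages, one value per downstream grid line.
profile_est = est_depth.mean(axis=1)
profile_true = true_depth.mean(axis=1)
```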

A similar comparison is shown in Fig. 7, which compares the time- and width-averaged total water depths at the data collection locations and times as a function of downstream distance. It is notable that the deepest parts of the river are underestimated. This is due to the reduced impact of the bathymetry on the surface velocity in deep water; that is, in the deepest parts of the river, the surface velocity is relatively insensitive to the bottom variations.

To summarize the overall algorithm performance, Fig. 8 shows comparisons of estimated and true water depth and bathymetry at the velocity data times and locations, using the same approach as described above in the velocity comparisons. The total water depth comparison indicates that the algorithm performs extremely well for depths that are between 2 and 8 m (the mean and RMSEs are −0.04 and 1.56 m, respectively); there are very few data points with depths less than 2 m, and depths greater than 12 m are also rare. In Fig. 8, bins with fewer than 10 data points are omitted (the 20 largest bins have substantially more than 100 points each). The bathymetry comparison is generally an analog to the 1D plots discussed earlier, in that the river drops in elevation so much over the 95 km that the binning results in a downstream comparison. The overall bias in bathymetry (−0.86 m) is somewhat less than expected, considering the large bias observed upstream in Fig. 6. This is due to the reduced number of data observations in the upstream portion of the river and the width of the river (it is much narrower in the upstream reach than in the downstream reach).

One notable result is the apparent bias in the bathymetry for the upstream portion of the river, which is not present in the total water depth comparison (Figs. 6 and 7). In considering the mechanism for such a bias, we find that there are large positive and negative velocity errors in the region where the shift is apparent, and there is also a noticeable gap in the data in this region, as shown in Fig. 9. Because of this gap and significant (downstream) velocity gradient, both the initialization (first-guess bathymetry) and progression of the algorithm are hampered in this region. Also, as mentioned above, there are fewer observational points in the upstream 30 km of the river, which results in a smaller contribution to the cost function, so downstream adjustments to the bathymetry affect the overall velocity comparisons disproportionately.

To further examine the algorithm’s accuracy, Fig. 10 shows a synoptic view of the depth comparison for the entire river. The figure shows the initial-guess, the final estimated, and the true depths, along with depth error. The depths are plotted in a downstream and cross-stream coordinate system to provide a more compact view of the entire river.

These plots emphasize the overall accuracy of the algorithm, while also showing that the largest error occurs in the region with the gap in data coverage. Figures 11 and 12 depict the velocity comparisons in the same coordinate system for the two data collection passes and highlight the density of observations. The visual impression of the errors in the data-starved region is dampened by the small number of data points there; there were many more observations in the (low velocity) downstream portion of the river, and the first-pass data were provided at a higher resolution than the second-pass data.

## 5. Summary and conclusions

Variational inverse modeling offers a data-driven, physics-based methodology for extracting river bathymetry from surface velocity observations. Applying the methodology and algorithm described in this paper, the river depth for a 95-km reach of the Columbia River in Washington State was reconstructed to within 1.96 m RMSE. The error in the estimated velocity field versus the observed velocity field was reduced from 59 to 28 cm s^{−1}. While the algorithm was exercised on synthetic data prior to this work (Almeida 2012), this is the first application of the methodology to real observational data for which reliable, contemporaneous bathymetric ground truth was available to assess the efficacy of the approach. The results suggest that the methodology and approach are sound.

The potential impacts of this effort are diverse, with applications to navigation, channel dredging, and ecological and habitat studies, for example. Further, the methodology can be expanded upon by investigating other river characterization problems, such as discharge and bottom friction estimation. Incorporating other sources (and types) of data into the modeling framework will allow further refinement of the technique and will expand on the potential uses of such a capability.

The authors would like to acknowledge the support of the Office of Naval Research, in particular Tom Drake and Reggie Beach. Funding for this effort was provided under Office of Naval Research Contract N00014-11-C-0317. The ground truth bathymetry was provided to SRI by Todd Holland at the Naval Research Laboratory. The surface velocity observational data were provided by Steve Anderson at Areté Associates.

## REFERENCES

Almeida, T. G., 2012: Estimation of river characteristics from remote sensing data. Ph.D. dissertation, Michigan State University, 91 pp.

Dugan, J. P., S. P. Anderson, C. C. Piotrowski, and S. B. Zuckerman, 2014: Airborne infrared remote sensing of riverine currents. *IEEE Trans. Geosci. Remote Sens.*, **52**, 3895–3907, https://doi.org/10.1109/TGRS.2013.2277815.

Durand, M., K. M. Andreadis, D. E. Alsdorf, D. P. Lettenmaier, D. Moller, and M. Wilson, 2008: Estimation of bathymetric depth and slope from data assimilation of swath altimetry into a hydrodynamic model. *Geophys. Res. Lett.*, **35**, L20401, https://doi.org/10.1029/2008GL034150.

Hilldale, R. C., and D. Raff, 2008: Assessing the ability of airborne LiDAR to map river bathymetry. *Earth Surf. Processes Landforms*, **33**, 773–783, https://doi.org/10.1002/esp.1575.

Hostache, R., P. Matgen, L. Giustarini, F. N. Teferle, C. Tailliez, J.-F. Iffly, and G. Corato, 2015: A drifting GPS buoy for retrieving effective riverbed bathymetry. *J. Hydrol.*, **520**, 397–406, https://doi.org/10.1016/j.jhydrol.2014.11.018.

Hulsing, H., W. Smith, and E. D. Cobb, 1966: Velocity-head coefficients in open channels. Dept. of the Interior Geological Survey Water-Supply Paper 1869-C, 45 pp.

Irish, J. L., J. K. McClung, and W. J. Lillycrop, 2000: Airborne lidar bathymetry: The SHOALS system. *Bull. Int. Navig. Assoc.*, **103**, 43–54.

Kurapov, A. L., and H. T. Özkan-Haller, 2013: Bathymetry correction using an adjoint component of a coupled nearshore wave-circulation model: Tests with synthetic velocity data. *J. Geophys. Res. Oceans*, **118**, 4673–4688, https://doi.org/10.1002/jgrc.20306.

Landon, K. C., G. W. Wilson, H. T. Özkan-Haller, and J. H. MacMahan, 2014: Bathymetry estimation using drifter-based velocity measurements on the Kootenai River, Idaho. *J. Atmos. Oceanic Technol.*, **31**, 503–514, https://doi.org/10.1175/JTECH-D-13-00123.1.

Legleiter, C. J., 2012: Remote measurement of river morphology via fusion of LiDAR topography and spectrally based bathymetry. *Earth Surf. Processes Landforms*, **37**, 499–518, https://doi.org/10.1002/esp.2262.

Legleiter, C. J., B. T. Overstreet, C. L. Glennie, Z. Pan, J. C. Fernandez-Diaz, and A. Singhania, 2016: Evaluating the capabilities of the CASI hyperspectral imaging system and Aquarius bathymetric LiDAR for measuring channel morphology in two distinct river environments. *Earth Surf. Processes Landforms*, **41**, 344–363, https://doi.org/10.1002/esp.3794.

Li, Y., I. M. Navon, P. Courtier, and P. Gauthier, 1993: Variational data assimilation with a semi-Lagrangian semi-implicit global shallow-water equation model and its adjoint. *Mon. Wea. Rev.*, **121**, 1759–1769, https://doi.org/10.1175/1520-0493(1993)121<1759:VDAWAS>2.0.CO;2.

Moghimi, S., H. T. Özkan-Haller, G. W. Wilson, and A. Kurapov, 2016: Data assimilation for bathymetry estimation at a tidal inlet. *J. Atmos. Oceanic Technol.*, **33**, 2145–2163, https://doi.org/10.1175/JTECH-D-14-00188.1.

Patankar, S. V., 1981: A calculation procedure for two-dimensional elliptic situations. *Numer. Heat Transf.*, **4**, 409–425, https://doi.org/10.1080/01495728108961801.

Rantz, S. E., and Coauthors, 1982a: Measurement and computation of streamflow: Vol. 1. Measurement of stage and discharge. Dept. of the Interior Geological Survey Water-Supply Paper 2175, 313 pp.

Rantz, S. E., and Coauthors, 1982b: Measurement and computation of streamflow: Vol. 2. Computation of discharge. Dept. of the Interior Geological Survey Water-Supply Paper 2175, 373 pp.

Rekanos, I. T., and A. Raisanen, 2003: Microwave imaging in the time domain of buried multiple scatterers by using an FDTD-based optimization technique. *IEEE Trans. Magn.*, **39**, 1381–1384, https://doi.org/10.1109/TMAG.2003.810526.

Sanders, B. F., and S. F. Bradford, 2002: High-resolution, monotone solution of the adjoint shallow-water equations. *Int. J. Numer. Methods Fluids*, **38**, 139–161, https://doi.org/10.1002/fld.206.

Tiffan, K. F., R. D. Garland, and D. W. Rondorf, 2002: Quantifying flow-dependent changes in subyearling fall Chinook salmon rearing habitat using two-dimensional spatially explicit modeling. *North Amer. J. Fish. Manage.*, **22**, 713–726, https://doi.org/10.1577/1548-8675(2002)022<0713:QFDCIS>2.0.CO;2.

Wang, Z., I. Navon, F. Le Dimet, and X. Zou, 1992: The second order adjoint analysis: Theory and applications. *Meteor. Atmos. Phys.*, **50**, 3–20, https://doi.org/10.1007/BF01025501.

Wilson, G. W., and H. T. Özkan-Haller, 2012: Ensemble-based data assimilation for estimation of river depths. *J. Atmos. Oceanic Technol.*, **29**, 1558–1568, https://doi.org/10.1175/JTECH-D-12-00014.1.

Wilson, G. W., H. T. Özkan-Haller, and R. A. Holman, 2010: Data assimilation and bathymetric inversion in a two-dimensional horizontal surf zone model. *J. Geophys. Res.*, **115**, C12057, https://doi.org/10.1029/2010JC006286.

Yoon, Y., M. Durand, C. J. Merry, E. A. Clark, K. M. Andreadis, and D. E. Alsdorf, 2012: Estimating river bathymetry from data assimilation of synthetic SWOT measurements. *J. Hydrol.*, **464–465**, 363–375, https://doi.org/10.1016/j.jhydrol.2012.07.028.

Zaron, E. D., M.-A. Pradal, P. D. Miller, A. F. Blumberg, N. Georgas, W. Li, and J. Muccino Cornuelle, 2011: Bottom topography mapping via nonlinear data assimilation. *J. Atmos. Oceanic Technol.*, **28**, 1606–1623, https://doi.org/10.1175/JTECH-D-11-00070.1.