# Search Results

## You are looking at 1–10 of 18 items for

- Author or Editor: Robert N. Miller



## Abstract

As ocean models improve, assimilation of data with the help of models becomes increasingly important. The Kalman filter provides a method for assimilation of data that are arbitrarily distributed in time and space and have differing error characteristics. Its desirable features are optimality in the least squares sense for a broad class of systems, and recursiveness, i.e., the algorithm depends only upon statistical quantities that are updated with each successive observation. The observations themselves may then be discarded, and no actual history of the system under study need be retained.

The full Kalman filter, however, places considerable demands on computing resources. There are few examples with solutions in closed form, relatively little is known about the case in which the system under study is governed by partial rather than ordinary differential equations, and the effects of nonlinearity are still incompletely understood.

In this study a first step is undertaken toward the formulation of a suitably simplified, computationally efficient form of the Kalman filter for estimation and prediction of ocean eddy fields. In this step, the full Kalman filter is applied to simplified systems designed to capture some of the properties of open ocean models, and computational results are analyzed and interpreted in terms of realistic models and datasets.
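The recursive forecast/analysis cycle described above can be sketched for a generic linear state-space system. All matrices below are illustrative placeholders, not the ocean models of the paper:

```python
import numpy as np

def kalman_step(x, P, y, M, H, Q, R):
    """One forecast/analysis cycle of the discrete Kalman filter.

    x, P : prior state estimate and its error covariance
    y    : new observation vector
    M    : state transition (model) matrix
    H    : observation operator
    Q, R : model- and observation-error covariances
    """
    # Forecast step: propagate the estimate and its covariance.
    x_f = M @ x
    P_f = M @ P @ M.T + Q
    # Analysis step: blend forecast and observation via the Kalman gain.
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
    x_a = x_f + K @ (y - H @ x_f)
    P_a = (np.eye(len(x)) - K @ H) @ P_f
    return x_a, P_a
```

Only `x` and `P` are carried forward from cycle to cycle; the observation `y` may be discarded after the update, which is the recursiveness property the abstract highlights.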


## Abstract

In this work the performance of ensembles generated by commonly used methods in a nonlinear system with multiple attractors is examined. The model used here is a spectral truncation of a barotropic quasigeostrophic channel model. The system studied here has 44 state variables, great enough to exhibit the problems associated with high state dimension, but small enough so that experiments with very large ensembles are practical, and relevant probability density functions (PDFs) can be evaluated explicitly. The attracting sets include two stable limit cycles.

To begin, the basins of attraction of two known stable limit cycles are characterized. Large ensembles are then used to calculate the evolution of initially Gaussian PDFs with a range of initial covariances. If the initial covariances are small, the PDF remains essentially unimodal, and the probability that a point drawn from the initial PDF lies in a different basin of attraction from the mean of that PDF is small. If the initial covariances are so large that there is significant probability that a given point in the initial ensemble does not lie in the same basin of attraction as the mean, the initial Gaussian PDF will evolve into a bimodal PDF. In this case, graphical representation of the PDF appears to split into two distinct regions of relatively high probability.

The ability of smaller ensembles drawn from spaces spanned by singular vectors and by bred vectors to capture this splitting behavior is then investigated, with the objective here being to see how well they capture multimodality in a highly nonlinear system. The performance of similarly small random ensembles drawn without dynamical constraints is also evaluated.

In this application, small ensembles chosen from subspaces of singular vectors performed well, their weakest performance being for an ensemble with relatively large initial variance for which the Gaussian character of the initial PDF remained intact. This was the best case for the bred vectors because of their tendency to align tangent to the attractor, but the bred vectors were at a disadvantage in detection of the tendency of an initially Gaussian PDF to evolve into a bimodal one, as were the unconstrained ensembles.
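The basin-splitting behavior described above can be illustrated in a toy one-dimensional analogue (not the paper's 44-variable channel model): the gradient system dx/dt = x − x³ has stable attractors at x = ±1, and an initially Gaussian ensemble either stays unimodal or splits into two modes depending on its initial spread:

```python
import numpy as np

def evolve_ensemble(x0, steps=500, dt=0.02):
    """Integrate dx/dt = x - x**3 (stable attractors at x = +/-1)
    for every ensemble member with forward Euler."""
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * (x - x**3)
    return x

rng = np.random.default_rng(0)
mean, n = 0.1, 100_000

# Small initial spread: almost all members share the basin of x = +1.
tight = evolve_ensemble(rng.normal(mean, 0.05, n))
# Large initial spread: members straddle the basin boundary at x = 0,
# and the initially Gaussian PDF evolves into a bimodal one.
wide = evolve_ensemble(rng.normal(mean, 1.0, n))

frac_tight = np.mean(tight < 0)  # probability mass in the "wrong" basin
frac_wide = np.mean(wide < 0)
```

With the small variance, only a few percent of members cross into the other basin; with the large variance, nearly half do, so the final ensemble is concentrated near both attractors.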


## Abstract

An analysis of variational data assimilation schemes for linear dynamical forecast models shows that the penalty functional must include an explicit contribution from the initial conditions in order to ensure a unique, low-noise forecast. The noise level is related to the effective number of data being assimilated.
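In generic notation (illustrative, not taken from the paper), a penalty functional of the kind described might read, with an explicit background term constraining the initial conditions:

```latex
J[x_0] \;=\; (x_0 - x_b)^{\mathsf T} B^{-1} (x_0 - x_b)
\;+\; \sum_{k=1}^{N} \bigl(y_k - H x_k\bigr)^{\mathsf T} R^{-1} \bigl(y_k - H x_k\bigr),
\qquad x_k = M^k x_0 .
```

Here \(x_b\) is a prior estimate of the initial state with error covariance \(B\), \(y_k\) are the observations with error covariance \(R\), and \(M\) is the linear model. Dropping the first term is what the analysis warns against: the data term alone may fail to determine \(x_0\) uniquely, and the resulting noise level grows with the effective number of data being assimilated.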


## Abstract

The Kalman filter is implemented and tested for a simple model of sea level anomalies in the tropical Pacific, using tide gauge data from six selected island stations to update the model. The Kalman filter requires detailed statistical assumptions about the errors in the model and the data. In this study, it is assumed that the model errors are dominated by the errors in the wind stress analysis. The error model is a simple covariance function with parameters fit from the observed differences between the tide gauge data and the model output. The fitted parameters are consistent with independent estimates of the errors in the wind stress analysis. The calibrated error model is used in a Kalman filtering scheme to generate monthly sea level height anomaly maps for the tropical Pacific. The filtered maps, i.e., those which result from data assimilation, exhibit fine structure that is absent from the unfiltered model output, even in regions removed from the data insertion points. Error estimates, an important byproduct of the scheme, suggest that the filter reduces the error in the equatorial wave guide by about 1 cm. The few independent verification points available are consistent with this estimate. Given that only six data points participate in the data assimilation, the results are encouraging, but it is obvious that model errors cannot be substantially reduced without more data.


## Abstract

Advanced data assimilation methods are applied to simple but highly nonlinear problems. The dynamical systems studied here are the stochastically forced double well and the Lorenz model. In both systems, linear approximation of the dynamics about the critical points near which regime transitions occur is not always sufficient to track their occurrence or nonoccurrence.

Straightforward application of the extended Kalman filter yields mixed results. The ability of the extended Kalman filter to track transitions of the double-well system from one stable critical point to the other depends on the frequency and accuracy of the observations relative to the mean-square amplitude of the stochastic forcing. The ability of the filter to track the chaotic trajectories of the Lorenz model is limited to short times, as is the ability of strong-constraint variational methods. Examples are given to illustrate the difficulties involved, and qualitative explanations for these difficulties are provided.

Three generalizations of the extended Kalman filter are described. The first is based on inspection of the innovation sequence, that is, the successive differences between observations and forecasts; it works very well for the double-well problem. The second, an extension to fourth-order moments, yields excellent results for the Lorenz model but will be unwieldy when applied to models with high-dimensional state spaces. A third, more practical method—based on an empirical statistical model derived from a Monte Carlo simulation—is formulated, and shown to work very well.

Weak-constraint methods can be made to perform satisfactorily in the context of these simple models, but such methods do not seem to generalize easily to practical models of the atmosphere and ocean. In particular, it is shown that the equations derived in the weak variational formulation are difficult to solve conveniently for large systems.


## Abstract

Practical hydrostatic ocean models are often restricted to statically stable configurations by the use of a convective adjustment. A common way to do this is to assign an infinite heat conductivity to the water at a given level if the water column should become statically unstable. This is implemented in the form of a switch. When a statically unstable configuration is detected, it is immediately replaced with a statically stable one in which heat is conserved. In this approach, the model is no longer governed by a smooth set of equations, and usual techniques of variational data assimilation must be modified.

In this note, a simple one-dimensional diffusive model is presented. Despite its simplicity, this model captures the essential behavior of the convective adjustment scheme in a widely used ocean general circulation model. Since this simple model can be derived from the more complex general circulation model, it then follows that many of the properties of the constrained system can be observed in this very simple scalar ordinary differential equation with a constraint on the solution.

Techniques from the theory of optimal control are used to find solutions of a simple formulation of the variational data assimilation problem in this simple case. The optimal solution involves the solution of a nonlinear problem, even when the unconstrained dynamics are linear. In cases with discontinuous dynamics, one cannot define the adjoint of the linearized system in a straightforward manner. The very simplest variational formulation is shown to have nonunique stationary points and undesirable physical consequences. Modifications that lead to better behaved calculations and more meaningful solutions are presented.

Whereas it is likely that the underlying principles from control theory are applicable to practical ocean models, the technique used to solve the simple problem may be applicable only to steady problems. Derivation of suitable techniques for initial value problems will involve a major research effort.
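The switch-like adjustment described above can be sketched for a single column. This is a minimal illustration assuming equal layer thicknesses and a column of potential temperatures, not the scheme of any particular general circulation model:

```python
import numpy as np

def convective_adjustment(theta):
    """Enforce static stability on a column of potential temperature.

    theta[0] is the surface; stability requires theta to be non-increasing
    with depth. Each unstable pair is replaced by its heat-conserving mean
    (equal layer thicknesses assumed), mimicking the infinite-conductivity
    switch, and the column is re-scanned until no instability remains.
    """
    theta = theta.copy()
    changed = True
    while changed:
        changed = False
        for k in range(len(theta) - 1):
            if theta[k] < theta[k + 1]:  # colder water sits above warmer water
                mean = 0.5 * (theta[k] + theta[k + 1])
                theta[k] = theta[k + 1] = mean
                changed = True
    return theta
```

The `if` test is the nonsmooth switch: the map from the unadjusted to the adjusted column is continuous but not differentiable where the switch activates, which is exactly what complicates the adjoint calculation discussed above.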


## Abstract

The Geosat altimeter sea level observations in the tropical Pacific Ocean are used to evaluate the performance of a linear wind-driven equatorial wave model. The question posed is the extent to which such a model can describe the observed sea level variations. The Kalman filter and optimal smoother are used to obtain a solution that is an optimal fit to the observations in a weighted least-squares sense. The total mean variance of the Geosat sea level observations is 98.1 cm^{2}, of which 36.6 cm^{2} is due to measurement errors, leaving 61.5 cm^{2} for the oceanographic signal to be explained. The model is found to account for about 68% of this signal variance, and the remainder is ascribed to the effects of physical mechanisms missing from the model. This result suggests that the Geosat data contain sufficient information for testing yet more sophisticated models. Utility of an approximate filter and smoother based on the asymptotic time limit of the estimation error covariance is also examined and compared with the estimates of the full time-evolving filter. The results are found to be statistically indistinguishable from each other, but the computational requirements are more than an order of magnitude less for the approximate filter/smoother. Corrections to the wind field that drives the model are also obtained by the smoother, but they are found to be only marginally improved when compared with in situ wind measurements. The substantial errors in the Geosat data and the simplicity of the present model prevent a reliable wind estimate from being made.
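The asymptotic-limit idea can be sketched as follows: iterate the forecast-error covariance recursion to its fixed point once, then reuse the resulting time-invariant gain at every cycle. The matrices below are illustrative placeholders, not the equatorial wave model of the paper:

```python
import numpy as np

def steady_state_gain(M, H, Q, R, tol=1e-10, max_iter=10_000):
    """Iterate the forecast-error covariance recursion of the Kalman
    filter to its fixed point and return the corresponding
    time-invariant gain K and covariance P."""
    n = M.shape[0]
    P = np.eye(n)
    for _ in range(max_iter):
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # gain for the current covariance
        P_a = (np.eye(n) - K @ H) @ P    # analysis-error covariance
        P_new = M @ P_a @ M.T + Q        # forecast to the next cycle
        if np.max(np.abs(P_new - P)) < tol:
            return K, P_new
        P = P_new
    raise RuntimeError("covariance iteration did not converge")
```

Once `K` is frozen, each assimilation cycle costs only a model step and a matrix-vector update rather than a full covariance propagation, which is consistent with the order-of-magnitude savings reported above.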


## Abstract

The latitudinal structure of annual equatorial Rossby waves in the tropical Pacific Ocean based on sea surface height (SSH) and thermocline depth observations is equatorially asymmetric, which differs from the structure of the linear waves of classical theory that are often presumed to dominate the variability. The nature of this asymmetry is such that the northern SSH maximum (along 5.5°N) is roughly 2 times that of the southern maximum (along 6.5°S). In addition, the observed westward phase speeds are roughly 0.5 times the predicted speed of 90 cm s^{−1} and are also asymmetric with the northern phase speeds, about 25% faster than the southern phase speeds. One hypothesized mechanism for the observed annual equatorial Rossby wave amplitude asymmetry is modification of the meridional structure by the asymmetric meridional shears associated with the equatorial current system. Another hypothesis is the asymmetry of the annually varying wind forcing, which is stronger north of the equator. A reduced-gravity, nonlinear, *β*-plane model with rectangular basin geometry forced by idealized Quick Scatterometer (QuikSCAT) wind stress is used to test these two mechanisms. The model with an asymmetric background mean current system perturbed with symmetric annually varying winds consistently produces asymmetric Rossby waves with a northern maximum (4.7°N) that is 1.6 times the southern maximum (5.2°S) and westward phase speeds of approximately 53 ± 13 cm s^{−1} along both latitudes. Simulations with a symmetric background mean current system perturbed by asymmetric annually varying winds fail to produce the observed Rossby wave structure unless the perturbation winds become strong enough for nonlinear interactions to produce asymmetry in the background mean current system. The observed latitudinal asymmetry of the phase speed is found to be critically dependent on the inclusion of realistic coastline boundaries.


## Abstract

Kalman filter theory and autoregressive time series are used to map sea level height anomalies in the tropical Pacific. Our Kalman filters are implemented with a linear state space model consisting of evolution equations for the amplitudes of baroclinic Kelvin and Rossby waves and data from the Pacific tide gauge network. In this study, three versions of the Kalman filter are evaluated through examination of the innovation sequences, that is, the time series of differences between the observations and the model predictions before updating. In a properly tuned Kalman filter, one expects the innovation sequence to be white (uncorrelated, with zero mean). A white innovation sequence can thus be taken as an indication that there is no further information to be extracted from the sequence of observations. This is the basis for the frequent use of whiteness, that is, lack of autocorrelation, in the innovation sequence as a performance diagnostic for the Kalman filter.

Our long-wave model embodies the conceptual basis of current understanding of the large-scale behavior of the tropical ocean. When the Kalman filter was used to assimilate sea level anomaly data, we found the resulting innovation sequence to be temporally correlated, that is, nonwhite and well fitted by an autoregressive process with a lag of one month. A simple modification of the way in which sea level height anomaly is represented in terms of the state vector for comparison to observation results in a slight reduction in the temporal correlation of the innovation sequences and closer fits of the model to the observations, but significant autoregressive structure remains in the innovation sequence. This autoregressive structure represents either a deficiency in the model or some source of inconsistency in the data.

When an explicit first-order autoregressive model of the innovation sequence is incorporated into the filter, the new innovation sequence is white. In an experiment with the modified filter in which some data were held back from the assimilation process, the sequences of residuals at the withheld stations were also white. To our knowledge, this has not been achieved before in an ocean data assimilation scheme with real data. Implications of our results for improved estimates of model error statistics and evaluation of adequacy of models are discussed in detail.
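A crude version of the whiteness diagnostic used above can be sketched with a lag-1 autocorrelation check; this is a simple illustration of the idea, not the authors' statistical machinery:

```python
import numpy as np

def lag1_autocorr(v):
    """Lag-1 sample autocorrelation of an innovation sequence."""
    d = np.asarray(v, dtype=float)
    d = d - d.mean()
    return np.dot(d[:-1], d[1:]) / np.dot(d, d)

def looks_white(v, z=1.96):
    """Crude whiteness check: is the lag-1 autocorrelation within the
    approximate 95% band +/- z / sqrt(N) expected for white noise?"""
    return abs(lag1_autocorr(v)) < z / np.sqrt(len(v))
```

An AR(1) innovation sequence of the kind found in the study would fail this check (its lag-1 autocorrelation stays near the AR coefficient), whereas the residuals of a properly tuned filter should pass it.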


## Abstract

Cumulus formation and convection initiation are examined near a cold front–dryline “triple point” intersection on 24 May 2002 during the International H_{2}O Project (IHOP). A new Lagrangian objective analysis technique assimilates in situ measurements using time-dependent Doppler-derived 3D wind fields, providing output 3D fields of water vapor mixing ratio, virtual potential temperature, and lifted condensation level (LCL) and water-saturated (i.e., cloud) volumes on a subdomain of the radar analysis grid. The radar and Lagrangian analyses reveal the presence of along-wind (i.e., longitudinal) and cross-wind (i.e., transverse) roll circulations in the boundary layer (BL). A remarkable finding of the evolving radar analyses is the apparent persistence of both transverse rolls and individual updraft, vertical vorticity, and reflectivity cores for periods of up to 30 min or more while moving approximately with the local BL wind. Satellite cloud images and single-camera ground photogrammetry imply that clouds tend to develop either over or on the downwind edge of BL updrafts, with a tendency for clouds to elongate and dissipate in the downwind direction relative to cloud layer winds due to weakening updrafts and mixing with drier overlying air. The Lagrangian and radar wind analyses support a parcel continuity principle for cumulus formation, which requires that rising moist air parcels achieve their LCL before moving laterally out of the updraft. Cumuli form within penetrative updrafts in the elevated residual layer (ERL) overlying the moist BL east of the triple point, but remain capped by a convection inhibition (CIN)-bearing layer above the ERL. Dropsonde data suggest the existence of a convergence line about 80 km east of the triple point where deep lifting of BL moisture and locally reduced CIN together support convection initiation.
