# Search Results

## You are looking at 1 - 10 of 17 items for

- Author or Editor: Peter C. McIntosh

## Abstract

No abstract available.

## Abstract

No abstract available.

## Abstract

The variational data assimilation method of Bennett and McIntosh permits an assessment of the information content of observational arrays. This method relies on an approximate knowledge of the (linear) dynamics of the system, and on the instrument locations, so that the efficiency of arrays may be ascertained before deployment. The cumbersome trial-and-error optimization of array configuration may be circumvented by a careful study of the physics underlying the data assimilation procedure. As an example, array design criteria are developed for observing barotropic tides using tide gauges. In an open-ended channel, the variational method endeavors to synthesize the free modes of an infinite channel. Instruments should be located to facilitate the resolution of these modes. In a channel that is too small to support propagating Poincaré waves, tide gauges are best deployed along open sea boundaries where the amplitude of exponentially decaying Poincaré waves is largest. The maximum number of instruments contributing independent information depends both on the accuracy of the numerical modeling involved and on the data accuracy. Hence the desired numerical resolution may be used to determine the optimum number of measurements (for a given instrument uncertainty), or vice versa.
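The mode-resolution criterion can be made concrete with a toy calculation: place instruments so that the matrix of free modes, sampled at the gauge positions, stays well conditioned. The cosine modes, positions, and counts below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy array-design diagnostic: the smallest singular value of the mode matrix
# sampled at the instrument positions measures how well the array separates
# the modes. Near zero means some mode combination is invisible to the array.
n_modes = 4

def modes(x):
    # Stand-in "free modes": cosines on a unit channel.
    return np.cos(np.outer(np.asarray(x), np.arange(n_modes)) * np.pi)

def resolution(positions):
    return np.linalg.svd(modes(positions), compute_uv=False)[-1]

good = np.linspace(0.0, 1.0, 5)                 # gauges spread along the channel
bad = np.array([0.0, 0.01, 0.02, 0.03, 0.04])   # gauges clustered at one end
```

A spread array keeps `resolution` order one, while the clustered array drives it toward zero, mirroring the point that extra instruments help only while they contribute independent information.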

## Abstract

Tracer conservation equations may be inverted to determine the flow field and macroscopic diffusion coefficients from known tracer distributions. An underdetermined system leads to an infinite number of possible solutions. The solution that is selected is the one that is as smooth as possible while still reproducing the tracer observations. The procedure suggested here is to define a penalty function that balances solution smoothness, based on spatial derivatives of the solution, against residuals in the conservation equations. The ratio of detail in the solution to equation error is controlled by one or more smoothing parameters, which will not usually be known prior to the inversion. A parameter estimation technique known as generalized cross-validation is used to determine the degree of smoothing based on optimizing the prediction of withheld information. The method is tested for the case of steady flow containing a range of spatial scales in a two-dimensional channel with a spatially varying diffusion coefficient. It is shown that the correct flow field and diffusivity may be reproduced relatively accurately from a knowledge of the distribution of two tracers for a variety of flow configurations. The impact on the solution of errors in the equations and errors in the tracer data is studied. It is found that relatively large (correlated) errors in the equations due to numerical truncation error have the same effect as relatively small random errors in the data. A useful qualitative diagnostic measure of the value of an inverse solution is introduced. It is a measure of the loss of independent information due to smoothing the solution and is related to the data resolution matrix of classical discrete inverse theory.
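The role of the smoothing parameter and its selection by generalized cross-validation can be sketched with a generic penalized least-squares problem; the identity forward operator, second-difference roughness penalty, and synthetic data below are stand-ins, not the paper's tracer inversion.

```python
import numpy as np

# Choose the smoothing parameter lam of a penalized least-squares fit by
# generalized cross-validation (GCV). All names here are illustrative.
rng = np.random.default_rng(0)
n = 80
x = np.linspace(0, 1, n)
G = np.eye(n)                                  # trivial forward operator
d = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)  # noisy "data"

# Second-difference roughness penalty: L m approximates m''.
L = np.diff(np.eye(n), 2, axis=0)

def gcv_score(lam):
    # Influence matrix A(lam) maps the data to the fitted values.
    A = G @ np.linalg.solve(G.T @ G + lam * L.T @ L, G.T)
    resid = d - A @ d
    return n * (resid @ resid) / np.trace(np.eye(n) - A) ** 2

lams = np.logspace(-8, 2, 60)
best = min(lams, key=gcv_score)                # grid search on the GCV score
m = np.linalg.solve(G.T @ G + best * L.T @ L, G.T @ d)  # smoothed solution
```

GCV picks the value of `lam` that best predicts withheld information in the leave-one-out sense, without requiring the noise level to be known in advance.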

## Abstract

The original time domain analysis of data from the Australian Coastal Experiment involved fitting coastal-trapped wave modes to an array of velocity time series using a truncated singular value decomposition. While the truncation was necessary for noise reduction, it is shown that important information concerning the separation of mode 1 and mode 2 was discarded. A weighted least-squares mode-fitting technique is introduced that uses the data to estimate both the signal-to-noise ratio and the relative weighting of the fitted modes. In addition, the velocity data are augmented by sea-level data.

Findings from the present analysis differ in several important respects from the original results. It is found that mode 1 has approximately twice the energy flux of mode 2 and that mode 3 is statistically insignificant at the southern end of the East Australian waveguide. In addition, mode 1 is not highly correlated with mode 2. These differences are primarily due to changes in mode 1; mode 2 remains essentially unchanged from the original analysis. These revised modes, when used as boundary conditions to a wind-forced coastal-trapped wave model that predicts velocity and sea level along the coast, lead to a small but significant increase in prediction skill over the original modes. The reanalysis raises questions regarding the energy source for the coastal-trapped wave modes.

The difference between the original and present analyses is reduced by the inclusion of sea-level data. The ability of the instrument array to resolve coastal-trapped wave modes is discussed, and the problems associated with nonorthogonality of the theoretical modal structures as sampled by the array are highlighted. It is noted that the small number of degrees of freedom in the data leads to 95% confidence limits on modal energy fluxes that are as large as 69% of the estimated values.
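The weighted fit can be illustrated schematically: solve for modal amplitudes given a noise variance and a prior variance per mode, so that nearly parallel, poorly constrained modes are regularized rather than discarded. The mode matrix, weights, and noise level below are invented for the demo.

```python
import numpy as np

# Weighted least-squares fit of nonorthogonal "modes" to noisy array data,
# with a prior variance per mode playing the role of the relative weighting.
rng = np.random.default_rng(1)
n_obs, n_modes = 40, 3
M = rng.standard_normal((n_obs, n_modes))
M[:, 1] = M[:, 0] + 0.3 * rng.standard_normal(n_obs)  # nearly parallel modes
a_true = np.array([2.0, -1.0, 0.5])
sigma = 0.2                                    # assumed observational noise
d = M @ a_true + sigma * rng.standard_normal(n_obs)

prior_var = np.array([4.0, 4.0, 1.0])          # assumed relative mode energies
A = M.T @ M / sigma**2 + np.diag(1.0 / prior_var)
a_hat = np.linalg.solve(A, M.T @ d / sigma**2)  # posterior-mean amplitudes
```

Unlike a truncated singular value decomposition, which discards the poorly resolved direction entirely, the prior weighting retains it with reduced amplitude.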

## Abstract

The zonally averaged circulation in the atmosphere or ocean can be misleading if the averaging is performed at constant height. In the ocean there is apparently anomalously large diapycnal motion forming the so-called Deacon cell. The atmospheric equivalents are the Ferrel cells. There are two zonal averaging techniques commonly used to avoid these spurious cells. One involves averaging at constant density, and this technique has been used in both fluids. The other technique, which has so far been applied only in the atmosphere, involves taking into account perturbation correlation terms to form the residual mean circulation. Using a Taylor series expansion, we show that these apparently dissimilar techniques are formally equivalent at leading order in perturbation amplitude. The equivalence is demonstrated using output from the FRAM Southern Ocean numerical model.
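In standard transformed-Eulerian-mean notation (generic symbols and sign conventions, not necessarily those of the paper), with overbars denoting zonal means and primes deviations from them, the residual mean circulation takes the form

```latex
% Residual-mean streamfunction: Eulerian-mean part plus an eddy-induced
% correction built from the perturbation correlation term.
\psi_{\mathrm{res}} = \bar{\psi} + \psi^{*},
\qquad
\psi^{*} = -\,\frac{\overline{v'\rho'}}{\bar{\rho}_{z}} .
```

Taylor expanding the constant-density average about the mean height of each isopycnal recovers \(\psi_{\mathrm{res}}\) at second order in perturbation amplitude, which is the sense in which the two averaging techniques agree.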

## Abstract

The time-averaged density conservation equation in *z* coordinates contains a forcing term that is the divergence of the transient eddy fluxes. These fluxes are due to the temporal correlation between the instantaneous velocity and density fields. Even when the instantaneous motion is adiabatic and the flow is statistically steady, the divergence of these eddy fluxes is nonzero, thereby causing the time-averaged flow to have an apparently diabatic component. That is, the time-averaged velocity has a component through the time-averaged density contours. Here a modified time-averaged velocity is derived that has a diabatic component only when there are genuine diabatic processes occurring or when the flow is statistically unsteady. This modified velocity is the sum of the usual Eulerian time-averaged velocity and an extra advection due to transient eddies. It is analogous to the residual-mean velocity defined for zonally averaged flows and is therefore termed the temporal-residual-mean (TRM) velocity.

The authors also derive the time-averaged conservation equation for a simple tracer, which is a function only of density. In the absence of diabatic mixing processes and if the flow is statistically steady, the TRM circulation is shown to advect the tracer value averaged along density surfaces, not the tracer value averaged at constant height. This result has implications for the way in which datasets or numerical model output should be averaged and analyzed. The results of this paper apply to both the atmosphere and the ocean or, indeed, to any turbulent stratified fluid.

## Abstract

Mesoscale eddies mix fluid parcels in a way that is highly constrained by the stratified nature of the fluid. The temporal-residual-mean (TRM) theory provides the link between the different views that are apparent from temporally averaging these turbulent flow fields in height coordinates and in density coordinates. Here the original TRM theory is modified so that it applies to unsteady flows. This requires a modification not only to the streamfunction (and hence the velocity vector) but also a specific interpretation of the density field; it is not the Eulerian-mean density. The TRM theory reduces the problem of parameterizing the eddy flux from three dimensions to two dimensions. The three-dimensional TRM velocity is shown to be the same as is obtained by averaging with respect to instantaneous density surfaces and the averaged conservation equations in height coordinates and in density coordinates are the same except for a nondivergent flux that is identified and explained. The TRM theory demonstrates that the tracers (such as salinity and potential temperature) that are carried by an eddyless ocean model must be interpreted as the thickness-weighted tracers that result from averaging in density coordinates.

The extra streamfunction of the temporal-residual-mean flow, termed the quasi-Stokes streamfunction, has a simple interpretation that proves valuable in developing plausible boundary conditions for this streamfunction: at any height *z,* the quasi-Stokes streamfunction is the contribution of temporal perturbations to the horizontal transport of water that is more dense than the density of the surface having time-mean height *z.* Importantly, the extra three-dimensional velocity derived from the quasi-Stokes streamfunction is not the bolus transport that arises when averaging in density coordinates. Therefore the Gent and McWilliams eddy parameterization scheme is not a parameterization of the bolus velocity but rather of the quasi-Stokes velocity of the temporal-residual-mean circulation. The physical interpretation of the quasi-Stokes streamfunction implies that it must be tapered smoothly to zero at the top and bottom of the ocean rather than having delta functions of velocity against these boundaries. The common assumption of downgradient flux of potential vorticity along isopycnals is discussed and it is shown that this does not sufficiently constrain the three-dimensional quasi-Stokes advection because only the vertical derivative of the quasi-Stokes streamfunction is specified. Near-boundary uncertainty in the potential vorticity fluxes translates into uncertainty in the depth-averaged heat flux. The horizontal TRM momentum equation is derived and leads to an alternative method for including the effects of eddies in eddyless models.
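A minimal numerical sketch of the closure discussed above, assuming a Gent and McWilliams-type relation in which the quasi-Stokes streamfunction is an eddy coefficient times the isopycnal slope, tapered smoothly to zero at the surface and bottom; the coefficient, taper shape, and density field are all invented for the demonstration.

```python
import numpy as np

# Eddy-induced (quasi-Stokes) circulation from a GM-style closure in a
# two-dimensional (y, z) section. Everything here is illustrative.
nz, ny = 50, 60
z = np.linspace(-1000.0, 0.0, nz)              # height (m), negative down
y = np.linspace(0.0, 1.0e6, ny)                # meridional distance (m)
Z, Y = np.meshgrid(z, y, indexing="ij")
rho = 1027.0 - 2.0e-3 * Z + 0.5 * np.sin(np.pi * Y / 1.0e6)  # mean density

drho_dy = np.gradient(rho, y, axis=1)
drho_dz = np.gradient(rho, z, axis=0)
slope = -drho_dy / drho_dz                     # isopycnal slope

kappa = 1000.0                                 # assumed eddy coefficient (m^2/s)
taper = np.sin(np.pi * (Z - z[0]) / (z[-1] - z[0]))  # zero at top and bottom
psi = kappa * slope * taper                    # quasi-Stokes streamfunction

v_star = np.gradient(psi, z, axis=0)           # eddy-induced v* = psi_z
w_star = -np.gradient(psi, y, axis=1)          # eddy-induced w* = -psi_y
```

The sinusoidal taper is one simple way to satisfy the boundary requirement, and deriving the velocities from a streamfunction makes the eddy-induced flow nondivergent by construction in the continuous limit.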

## Abstract

The performance of a box inverse model is tested using output from a near-eddy-resolving numerical model. Conservation equations are written in isopycnal layers for three properties: mass, heat, and salt anomaly. If the equations are free of error and the vertical exchange of properties between layers is negligible or known, the reference level velocity structure is quite accurately reproduced despite the underdetermined nature of the problem. If the interlayer fluxes of properties are not negligible and they are ignored, the solution for the reference level velocities is poor. If the interlayer fluxes of properties are included as additional unknowns in the inversion, they can be accurately estimated provided the column weights are chosen appropriately. Column weights that minimize the ratio of largest to smallest singular value (the “condition number”) result in the best solutions for interfacial fluxes, and generally also for lateral fluxes. This choice of column weights also makes the inversion insensitive to data error: Inversions containing typical errors can be solved at full rank, obviating the need to estimate the rank. The choice of number of layers, and whether these layers are isopycnals or geopotentials, does not affect the accuracy of the inversion provided that interlayer fluxes are included as unknowns in the inversion. A reasonable estimate of solution accuracy is available by using the statistical approach to inverse problems, although this method can be sensitive to the choice of prior statistics.

Box inverse models do work, provided that they include interfacial fluxes as unknowns and that these are weighted appropriately. Such a model can successfully determine interfacial fluxes and, in some cases, horizontal fluxes. However, the model will not generally reproduce the detailed structure of the reference level velocities.
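The column-weighting point can be illustrated on a generic underdetermined system E x = d; the matrix, its mixed column scales, and the unit-norm weighting below are invented for the demo, with simple column normalization standing in for the paper's condition-number-minimizing choice.

```python
import numpy as np

# Underdetermined constraint matrix with badly mixed column scales, as when
# reference velocities and interfacial fluxes share one system of equations.
rng = np.random.default_rng(2)
E = rng.standard_normal((6, 10)) * np.array([1.0, 1e3] * 5)
d = rng.standard_normal(6)

def condition_number(A):
    s = np.linalg.svd(A, compute_uv=False)
    return s[0] / s[-1]

# Normalize each column to unit norm, shrinking the spread of singular
# values relative to the badly scaled original system.
W = 1.0 / np.linalg.norm(E, axis=0)
Ew = E * W

x_w, *_ = np.linalg.lstsq(Ew, d, rcond=None)   # full-rank min-norm solution
x = W * x_w                                    # map back to physical unknowns
```

Because the weighted system is well conditioned it can be solved at full rank, and the solution is mapped back to the physical unknowns through the weights.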

## Abstract

Daily rainfall during the April–October growing season in a major cropping region of southeastern Australia has been related to particular types of synoptic weather systems over a period of 33 yr. The analysis reveals that cutoff lows were responsible for at least 50% of all growing-season rainfall and accounted for 80% of daily rainfall events exceeding 25 mm per station. The proportion of rainfall contributed by cutoff lows varies throughout the growing season. It is highest in austral autumn and spring (55% and 57%, respectively) and falls to a minimum in July (42%). By way of contrast, the total contribution of all types of frontal systems to growing-season rainfall is about 32%, although the monthly value reaches a maximum of 41% in July when mean cutoff rainfall reaches a minimum. Rainfall associated with fronts is strongly concentrated in the lower range of daily falls (less than 10 mm per station). Frontal rainfall is found to be more consistent from year to year than is cutoff rainfall. The number of cutoff lows per season is highly variable, and there is a significant correlation between the number of cutoff days and atmospheric blocking in the region south of Australia in each month of the growing season. The mean amount of rainfall per cutoff day is also variable and has declined by approximately 0.8 mm over the analysis period. An understanding of the mechanisms controlling year-to-year variability of cutoff rainfall is therefore an important step in improving seasonal forecasts for agriculture in southeastern Australia.

## Abstract

The economic value of seasonal climate forecasting is assessed using a whole-of-chain analysis. The entire system, from sea surface temperature (SST) through pasture growth and animal production to economic and resource outcomes, is examined. A novel statistical forecast method is developed using the partial least squares spatial correlation technique with near-global SST. This method permits forecasts to be tailored for particular regions and industries. The method is used to forecast plant growth days rather than rainfall. Forecast skill is measured by performing a series of retrospective forecasts (hindcasts) over the previous century. The hindcasts are cross-validated to guard against the possibility of artificial skill, so there is no skill at predicting random time series. The hindcast skill is shown to be a good estimator of the true forecast skill obtained when only data from previous years are used in developing the forecast.

Forecasts of plant growth, reduced to three categories, are used in several agricultural examples in Australia. For the northeast Queensland grazing industry, the economic value of this forecast is shown to be greater than that of a Southern Oscillation index (SOI) based forecast and to match or exceed the value of a “perfect” category rainfall forecast. Reasons for the latter surprising result are given. Resource degradation, in this case measured by soil loss, is shown to remain insignificant despite increasing production from the land. Two further examples in Queensland, one for the cotton industry and one for wheat, are illustrated in less depth. The value of a forecast is again shown to match or exceed that obtained using the SOI, although further investigation of the decision-making responses to forecasts is needed to extract the maximum benefit for these industries.
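The cross-validation safeguard can be sketched as a leave-one-out hindcast loop; ordinary least squares stands in here for the paper's partial-least-squares step, and the predictor field, response, and record length are synthetic.

```python
import numpy as np

# Leave-one-out hindcasts: each year is predicted from a model trained with
# that year withheld, so skill on a random response stays near zero.
rng = np.random.default_rng(3)
n_years, n_pred = 100, 5
X = rng.standard_normal((n_years, n_pred))     # e.g. SST-based predictors
y = X @ np.array([1.0, 0.5, 0.0, 0.0, 0.0]) + rng.standard_normal(n_years)

def loo_hindcast(X, y):
    preds = np.empty(len(y))
    for i in range(len(y)):
        keep = np.arange(len(y)) != i          # withhold year i
        coef, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        preds[i] = X[i] @ coef
    return preds

skill = np.corrcoef(loo_hindcast(X, y), y)[0, 1]

# Guard against artificial skill: a random response shows ~zero skill.
y_rand = rng.standard_normal(n_years)
skill_rand = np.corrcoef(loo_hindcast(X, y_rand), y_rand)[0, 1]
```

Because every prediction is made with the target year withheld, the hindcast skill estimates genuine forecast skill rather than in-sample fit, which is the property the abstract describes.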
