Search Results
-contained, none of these supplemental navigation corrections are available. In this paper, we summarize the results of a set of field tests in which the Cobra-Tac was used to map out known paths. The tests show that navigation error results primarily from heading-dependent error in the Cobra-Tac internal compass. An analysis of this error is presented and a postprocessing correction method is proposed that significantly improves Cobra-Tac positional records. Finally, errors before and after the correction are
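The excerpt does not specify the form of the correction; below is a minimal sketch of one plausible postprocessing approach, assuming a classical compass-deviation model (a heading-dependent bias fitted while walking a known path) followed by re-integration of the dead-reckoning track. The function names and the sinusoidal bias form are illustrative assumptions, not the Cobra-Tac method itself.

```python
import numpy as np

def fit_heading_bias(measured_hdg_deg, true_hdg_deg):
    """Fit a classical compass-deviation curve (hard/soft-iron terms)
    to heading errors observed while traversing a known path."""
    psi = np.radians(measured_hdg_deg)
    err = np.radians(true_hdg_deg - measured_hdg_deg)
    # Design matrix for err = A + B sin(psi) + C cos(psi) + D sin(2 psi) + E cos(2 psi)
    X = np.column_stack([np.ones_like(psi), np.sin(psi), np.cos(psi),
                         np.sin(2 * psi), np.cos(2 * psi)])
    coeffs, *_ = np.linalg.lstsq(X, err, rcond=None)
    return coeffs

def correct_track(step_lengths_m, measured_hdg_deg, coeffs):
    """Re-integrate the dead-reckoning track with bias-corrected headings."""
    psi = np.radians(measured_hdg_deg)
    X = np.column_stack([np.ones_like(psi), np.sin(psi), np.cos(psi),
                         np.sin(2 * psi), np.cos(2 * psi)])
    psi_corr = psi + X @ coeffs
    dx = step_lengths_m * np.sin(psi_corr)   # east displacement per step
    dy = step_lengths_m * np.cos(psi_corr)   # north displacement per step
    return np.cumsum(dx), np.cumsum(dy)
```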
. 2009). These methods are suboptimal, but the impacts on analysis errors can be mitigated with careful tuning. However, the “optimal” tuning for one data sample may not be optimal for another since the true error statistics vary in space and time. A number of authors have described various approaches to thinning, including Purser et al. (2000), Ochotta et al. (2005), Ramachandran et al. (2005), and Lazarus et al. (2010), and its impact on DA, including Dando et al. (2007), Li et al
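As a concrete, deliberately simple illustration of thinning, the sketch below keeps one observation per coarse grid box (the one closest to the box centre); this is only one of the many strategies alluded to above, and the box size plays the role of the tuning parameter discussed.

```python
import numpy as np

def thin_by_gridbox(lon, lat, values, box_deg=1.0):
    """Grid-box thinning: within each box_deg x box_deg box, keep only the
    observation closest to the box centre."""
    ix = np.floor(lon / box_deg).astype(int)
    iy = np.floor(lat / box_deg).astype(int)
    keep = {}
    for k, (i, j) in enumerate(zip(ix, iy)):
        cx = (i + 0.5) * box_deg
        cy = (j + 0.5) * box_deg
        d2 = (lon[k] - cx) ** 2 + (lat[k] - cy) ** 2
        if (i, j) not in keep or d2 < keep[(i, j)][0]:
            keep[(i, j)] = (d2, k)
    idx = np.array(sorted(k for _, k in keep.values()))
    return lon[idx], lat[idx], values[idx]
```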
1. Introduction The development of the tangent linear model (TLM) and its adjoint in numerical weather prediction makes it possible to calculate corrections to the initial conditions that improve the accuracy of short- to medium-range forecasts (e.g., Klinker et al. 1998; Pu et al. 1997; Gelaro et al. 1998). One such technique implemented at the Canadian Meteorological Centre (CMC; see Laroche et al. 2002) is the key analysis error algorithm (Klinker et al. 1998). Until recently, it was
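The excerpt refers to adjoint-based corrections of the initial conditions; the toy sketch below illustrates the underlying idea with a linear model whose adjoint is simply the matrix transpose. It is not the CMC key analysis error algorithm, only a gradient-descent caricature of the principle that the adjoint maps short-range forecast error back to an initial-condition correction.

```python
import numpy as np

# Toy linear forecast model x_f = M x_0; its adjoint is M.T (illustrative stand-in for a TLM).
rng = np.random.default_rng(0)
n = 5
M = np.eye(n) + 0.1 * rng.standard_normal((n, n))

x_truth0 = rng.standard_normal(n)
x_analysis0 = x_truth0 + 0.3 * rng.standard_normal(n)   # analysis with initial-condition error

# Short-range forecast error, measured against the forecast from the "true" state.
e_f = M @ x_analysis0 - M @ x_truth0

# Gradient of J = 0.5 * ||e_f||^2 with respect to the initial conditions,
# obtained with the adjoint of the TLM.
grad_x0 = M.T @ e_f

# Correct the initial conditions by descending this gradient.
alpha = 0.5
x_corrected0 = x_analysis0 - alpha * grad_x0

print("forecast error before:", np.linalg.norm(M @ x_analysis0 - M @ x_truth0))
print("forecast error after :", np.linalg.norm(M @ x_corrected0 - M @ x_truth0))
```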
characteristics. Beyond research on ground validation and characteristic analysis of different precipitation products, quantitative error analysis and mathematical error modeling are more directly useful for precipitation data applications such as optimal multisource data fusion. Both ground-based and spaceborne radar precipitation estimates have their own error structure characteristics. Accurately describing the systematic and random errors of precipitation estimates is the key problem in obtaining
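One minimal way to quantify such systematic and random components is sketched below, assuming a simple multiplicative-bias model against a reference (e.g., gauge) field; this particular split is an illustrative assumption, not the error model of any specific product.

```python
import numpy as np

def error_decomposition(estimate, reference):
    """Split estimate error into a systematic (bias) part and a random part
    using a multiplicative-bias model: estimate ≈ bias * reference + noise."""
    mask = (reference > 0) & (estimate > 0)
    est, ref = estimate[mask], reference[mask]
    bias = est.sum() / ref.sum()          # systematic, multiplicative component
    residual = est - bias * ref           # what the bias model cannot explain
    random_std = residual.std(ddof=1)     # random component
    return bias, random_std
```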
1. Introduction Spatial analysis of observations, also called gridding, is a common task in oceanography and meteorology, and a series of methods and implementations exists and is widely used. Here, N_d data points with values d_i, i = 1, …, N_d, at locations (x_i, y_i) are generally distributed unevenly in space. Furthermore, the values of d_i are affected by observational errors, including representativity errors. From this dataset an analysis on a regular grid is often desired. It
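A minimal gridding sketch follows, assuming Gaussian distance weighting about a constant background; it is only one of the many analysis methods the passage refers to, and the length scale L and background value are assumptions.

```python
import numpy as np

def grid_analysis(xi, yi, di, xgrid, ygrid, L=1.0, background=0.0):
    """Gaussian-weighted analysis of N_d scattered values d_i at (x_i, y_i)
    onto a regular grid; L is the influence length scale."""
    Xg, Yg = np.meshgrid(xgrid, ygrid)
    analysis = np.full(Xg.shape, background, dtype=float)
    for j in range(Xg.shape[0]):
        for i in range(Xg.shape[1]):
            r2 = (xi - Xg[j, i]) ** 2 + (yi - Yg[j, i]) ** 2
            w = np.exp(-r2 / (2.0 * L ** 2))
            if w.sum() > 1e-12:
                analysis[j, i] = background + np.sum(w * (di - background)) / w.sum()
    return analysis
```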
proposed by Houtekamer et al. (1996). This method relies on the time evolution of some perturbations that are constructed to be consistent with the error contributions involved. There are three basic steps or components that can be involved in the time evolution. First, the analysis scheme is applied to some perturbed observations and to a perturbed background, which provides a perturbed analysis: this simulates the effect of the two information errors (namely, the observation errors and the
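A scalar toy version of that first step is sketched below: the same (optimal-interpolation-style) analysis is applied to observations and a background perturbed consistently with assumed error variances, and the spread of the perturbed analyses then approximates the analysis error. The error standard deviations and gain are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_o, sigma_b = 1.0, 2.0                       # assumed obs and background error std devs
K = sigma_b**2 / (sigma_b**2 + sigma_o**2)        # scalar optimal-interpolation gain

y, xb = 10.0, 8.0                                 # an observation and a background value
xa = xb + K * (y - xb)                            # unperturbed analysis

# Apply the same analysis scheme to perturbed observations and a perturbed background.
n_members = 1000
yp  = y  + sigma_o * rng.standard_normal(n_members)
xbp = xb + sigma_b * rng.standard_normal(n_members)
xap = xbp + K * (yp - xbp)                        # perturbed analyses

print("analysis:", xa)
print("sampled analysis-error std   :", xap.std(ddof=1))
print("theoretical analysis-error std:", np.sqrt(K * sigma_o**2))
```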
process. Overlapping streams are fundamentally equivalent to two realizations of the same cycling data assimilation system with the same observations but with two different initial conditions. Unlike free-running model cases, where chaos can quickly cause the two realizations to diverge, the cycling process should, in an optimal scenario, act to gradually constrain the two states toward the same analytical solution. This solution might still be far from the true state (i.e., it has analysis error), but the
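The convergence argument can be illustrated with a toy scalar cycling system, sketched below: two streams start from very different states, assimilate the same observations with a fixed gain, and are gradually drawn together while both retain some analysis error. The persistence "model" and the fixed gain are simplifying assumptions, not features of any particular operational system.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_o, sigma_b = 1.0, 1.0
K = sigma_b**2 / (sigma_b**2 + sigma_o**2)   # fixed scalar gain

def cycle(x0, obs):
    """Persistence 'model' plus a scalar analysis update at each cycle."""
    x = x0
    traj = []
    for y in obs:
        xb = x                    # trivial forecast (persistence)
        x = xb + K * (y - xb)     # analysis
        traj.append(x)
    return np.array(traj)

truth = 5.0
obs = truth + sigma_o * rng.standard_normal(40)   # identical observations for both streams

stream1 = cycle(x0=0.0, obs=obs)    # two very different initial conditions
stream2 = cycle(x0=20.0, obs=obs)

print("initial separation:", 20.0)
print("final separation  :", abs(stream1[-1] - stream2[-1]))
print("final analysis error (stream 1):", abs(stream1[-1] - truth))
```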
literature, some considerations for flux experiment designs are based on correlation scales of the flux variable, but we have not seen a rigorous error analysis of the flux. In Cuny et al. (2005), the authors used an array of moorings to estimate the volume, freshwater, and heat fluxes across Davis Strait. They calculated the correlations of temperature, salinity, and current velocity between adjacent moorings. Based on the low correlation values, they determined that the moorings were too sparse to
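A sketch of the kind of adjacency check described (correlating a variable between neighbouring moorings along a section) might look like the following; the data layout and variable choice are assumptions for illustration.

```python
import numpy as np

def adjacent_correlations(series_by_mooring):
    """Correlation of a variable (e.g., temperature) between each pair of
    adjacent moorings; series_by_mooring has shape (n_moorings, n_times),
    with moorings ordered along the section."""
    n = series_by_mooring.shape[0]
    return np.array([
        np.corrcoef(series_by_mooring[k], series_by_mooring[k + 1])[0, 1]
        for k in range(n - 1)
    ])
```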
of reflectivity can cause range location errors (due to a signal processing artifact) and developed a technique to correct for these errors. It should be noted that a treatment of error analysis can be found in Atlas et al. (1973). The work presented by Atlas et al. (1973), however, focuses on the error in the DSD due to a given error in vertical air motion w. The work presented here examines the error in rainfall rate due to vertical air motion w, different fall speed relationships
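To make the sensitivity concrete, the sketch below computes rain rate from a drop size distribution using a power-law fall speed, with and without accounting for a vertical air motion w; the fall-speed coefficients and the exponential DSD are illustrative assumptions, not the specific relationships examined in the paper.

```python
import numpy as np

def rain_rate_mm_per_h(N_D, D_mm, dD_mm, w_ms=0.0, a=3.78, b=0.67):
    """Rain rate from a drop size distribution N(D) [m^-3 mm^-1] using a
    power-law fall speed v(D) = a * D**b [m/s] (illustrative coefficients);
    the drop velocity relative to the ground is v(D) - w for updraft w > 0."""
    D_m = D_mm * 1e-3
    v_ground = a * D_mm**b - w_ms
    flux = np.sum(N_D * (np.pi / 6.0) * D_m**3 * v_ground * dD_mm)   # water depth flux, m/s
    return flux * 1e3 * 3600.0                                       # mm/h

# Example: relative error in R when a 1 m/s updraft is ignored.
D = np.arange(0.1, 6.0, 0.1)                  # drop diameters, mm
dD = np.full_like(D, 0.1)
N0, Lam = 8000.0, 2.0                         # exponential (Marshall-Palmer-like) DSD
N = N0 * np.exp(-Lam * D)

R_actual = rain_rate_mm_per_h(N, D, dD, w_ms=1.0)   # 1 m/s updraft present
R_naive  = rain_rate_mm_per_h(N, D, dD, w_ms=0.0)   # updraft ignored
print("relative error from ignoring w:", (R_naive - R_actual) / R_actual)
```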
was estimated from the gauge data. Following Habib and Krajewski (2002), an error variance separation analysis is used to explain the proportion of variance of the radar–gauge differences that could be attributed to the point-to-area variance of the gauges. The “radar error,” as defined by Habib and Krajewski (2002), is also calculated along with estimates of the parameterization and measurement errors to see how much of the radar error can be explained by the aforementioned errors. This is
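The variance separation itself reduces to a simple identity when the radar error and the gauge point-to-area sampling error are assumed uncorrelated; a minimal sketch, with the point-to-area variance supplied externally (e.g., from the gauge-cluster analysis mentioned above), is given below.

```python
import numpy as np

def error_variance_separation(radar_mm, gauge_mm, var_point_to_area):
    """Split Var(radar - gauge) into a radar-error part and the gauge
    point-to-area (representativeness) part, assuming the two error
    sources are uncorrelated:
        Var(R - G) = Var(radar error) + Var(point-to-area error)."""
    diff = radar_mm - gauge_mm
    var_diff = np.var(diff, ddof=1)
    var_radar_error = var_diff - var_point_to_area
    frac_explained = var_point_to_area / var_diff
    return var_radar_error, frac_explained
```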