## 1. Introduction

This paper describes an accurate automated technique of terrestrial photogrammetry that applies to weather images obtained in uncontrolled circumstances. Traditionally, photogrammetry has been performed on carefully obtained images from special airborne cameras with precisely calibrated focal length and orientation (Slama 1980). In meteorological terrestrial photogrammetry, images have provided much useful information. For example, Hoecker (1960), Golden and Purcell (1978), and others cited in Bluestein and Golden (1993) have measured wind speeds in tornadoes photogrammetrically. Wakimoto and Bringi (1988) utilized radar data overlaid on still photographs to document the development and descent of precipitation in deep cumulus clouds during the Microburst and Severe Thunderstorm (MIST) project. Colorado microbursts were similarly analyzed using still photographs (Wakimoto et al. 1994). A set of images of a Colorado tornado was overlaid with Doppler radar data in the analysis of Wakimoto and Martner (1992). These studies were performed with telephoto images obtained under relatively controlled circumstances: that is, known focal length lenses and negligible difference between the lens and visible horizons (for definitions of terms, see Table 1). This is not always the case with terrestrial weather photographs. These investigators used established photogrammetry procedures (Holle 1982, hereafter H82) that apply only to objects with small angular displacements from the principal axis of the lens and assume that the camera is held perfectly horizontal. In other words, the roll angle of the camera, or the angle at the principal point between the “vertical” side of the photograph and true vertical in object space, is assumed to be identically zero. Saunders (1963) outlined a more general technique that involves finding the lens horizon by a manual trial-and-error method.
The fast computer algorithm presented in this paper is essentially an automated and more precise version of Saunders' method with the labor-intensive manual iterations replaced by convergent iterative solutions of the photogrammetric equations.

The methods in this paper apply to movie or video images as well as still images. Historically, movies have been used to assess the motion of cloud or debris by comparing positions of a feature between frames exposed at a known time interval [e.g., Golden and Purcell (1978), Hoecker (1960), and references cited by Bluestein and Golden (1993)]. Much of the literature concerning motion picture weather photogrammetry is informal and will not be cited here. In most cases the small-angle (linear scaling) approximation described in section 3 was appropriately utilized for image scaling, and prephotogrammetry determination of landmark azimuth and elevation was used for image orientation.

In recent field programs such as the Verification of the Origins of Rotation in Tornadoes Experiment (VORTEX; Rasmussen et al. 1994), numerous photographs and video images were obtained and proved very useful for deducing cloud and tornado locations. These photographs were typically obtained with unknown focal length and camera orientation, unmarked principal point, and poorly calibrated image placement with respect to the camera optical axis. Often, cameras were hand-held with little attention to careful orientation with respect to the horizon. The cameras used lenses with a variety of focal lengths that often changed between exposures. Exact camera position was seldom recorded. Photographic parameters are not recorded at the time either by the general public or by scientists in the stressful, rapidly changing circumstances of a severe storm intercept. Thus the parameters have to be deduced a posteriori using information gained from revisiting the camera site with surveying equipment.

Very little information is present in the formal meteorological literature concerning techniques for single-camera or multicamera photogrammetry of images obtained in these uncontrolled circumstances. For that matter, papers in the formal literature rarely address issues such as the actual location of the horizon line, camera roll angle about the optical axis, or other crucial details of photogrammetric analysis. This paper describes a new algorithm for analyzing terrestrial weather images obtained with any type of lens (telephoto, normal, or wide angle). The input for this algorithm consists of measurements obtained in prephotogrammetry surveys from the camera site (section 2). The information that must be obtained in these surveys consists of locating the exact camera position and from this point measuring precisely the azimuth and elevation angles of landmarks that appear in the images. The mathematical method for retrieving focal length, principal azimuth, and camera tilt and roll is developed in section 3. Once these parameters are found, the azimuth and elevation angle of any feature in the image can be determined. The scale distortion inherent in photographs obtained with wide-angle lenses is accommodated by the nonlinear equations developed herein. The algorithm is tested in section 4 using a telephoto image obtained during VORTEX and also simulated wide-angle photography. The range of a visible feature from a camera is unknown unless further information is available such as the map of the damage path in the case of a tornado or a simultaneous image from a second camera with a different viewing angle. Section 5 describes a search method used in analyses of VORTEX data for locating the same feature in photographs from different directions and then deducing its 3D position.

## 2. The prephotogrammetry survey

A prephotogrammetric field survey is unnecessary in the ideal situation when full images are obtained with a special camera and the camera's exact Cartesian and angular coordinates are measured and recorded. The special camera would have a fixed and calibrated focal length and would mark the position of the principal point on the image (Slama 1980). Such controlled procedures are impractical when pursuing rare, short-lived phenomena such as tornadoes because of time constraints, the expense of special cameras, and the need for zoom lenses to obtain an optimum field of view for a given situation. The meteorologist has even less control of the data gathering if photography is obtained from the public, rather than as part of a scientific project. Thus focal length, camera orientation, and sometimes camera location are unknowns, which can be deduced accurately only through information acquired in a prephotogrammetric survey. Lacking a survey, the analyst typically assumes that the visible horizon line coincides with the lens horizon and that this line can then be used to determine camera orientation. However, this only works with a flat horizon at the same elevation as the camera (a condition that is hard to verify).

We now present guidelines for conducting a prephotogrammetry survey that are sufficient to obtain the data needed for scaling and orienting a weather image. The goals of the survey are to determine the camera location, as well as azimuths and elevations of landmarks visible in the image(s). Further, information must be obtained so that field-measured azimuths can be made earth relative in later analysis. Certain equipment is essential for the survey, including a global positioning system (GPS) receiver and measuring wheels to determine camera location, and a survey transit capable of measuring azimuth and elevation with an accuracy of 1′ of arc (0.0167°). Also essential are prints or traces of the original photography with marked identifiable landmarks, and a camera of fixed focal length.

First, the exact camera location must be determined. This can be facilitated in a field project if the location is marked at the time of the original photography by spray painting the ground or recording the location from GPS (the former is presently more accurate and may be essential if nearby, tall landmarks are used for image scaling and orientation). During a survey, the camera site is located through simple comparison of perspective between foreground and background objects in the photograph. In practice, a camera site can usually be determined to within 1 m unless the image is lacking in landmarks (which occurs most often in telephoto images). After the site is found, it is necessary to determine the geographical location of the site, preferably with GPS. For gross error checking, distance to nearby landmarks can be measured with a measuring wheel. Further, by measuring the azimuths (as described below) of distant landmarks of a known location, triangulation can be used to validate the camera location information.

After establishing the camera location, it is necessary to measure the azimuth and elevation angles (*ϕ,* *θ*) of as many landmarks as practical in order to minimize errors (section 3). The survey instrument is placed as close as possible to the actual camera location and leveled very precisely, because both azimuth and elevation of landmarks will be measured. In practice, a sketch is first made of the scene showing and numbering the approximate locations of the landmarks in the image, and a numbered list is prepared that describes each landmark (e.g., “top of leaning power pole”). Increasing the number of landmarks that are utilized increases the confidence with which both subjective and objective scaling and orientation can be performed. In practical experience, it has proven valuable to have two people involved in the survey, each making independent measurements in order to catch gross errors.

The azimuths obtained in the field are measured relative to the survey instrument because it is not generally possible to know the orientation of true north during the survey. In order to determine earth-relative azimuths in later processing, it is possible to locate two or more “reference landmarks” and measure the azimuth of these during the field survey. Reference landmarks should be tall objects such as transmission towers that are visible from many or all of the camera sites that are being surveyed. It is imperative that the actual location of the reference landmarks be ascertained (through topographic maps or GPS). Then, using the known camera and reference landmark locations, the actual azimuth of the reference landmark can be computed to within a fraction of a degree, and by comparing this value with the measured azimuth, the azimuth bias of the survey instrument at the camera site can be determined.
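The azimuth-bias step above is simple arithmetic once the camera and reference-landmark coordinates are known. The following sketch is our own illustration, not part of the paper's procedure: the function names are invented, and a spherical-earth great-circle bearing is assumed (adequate at these short ranges).

```python
import math

def true_azimuth(lat_cam, lon_cam, lat_ref, lon_ref):
    """Initial great-circle bearing (deg clockwise from true north) from the
    camera site to a reference landmark, from their latitudes/longitudes."""
    p1, p2 = math.radians(lat_cam), math.radians(lat_ref)
    dlon = math.radians(lon_ref - lon_cam)
    x = math.sin(dlon) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360.0

def instrument_bias(measured_az, computed_az):
    """Bias (deg, wrapped to +/-180) to ADD to every field-measured azimuth
    so that it becomes earth relative."""
    return ((computed_az - measured_az + 180.0) % 360.0) - 180.0
```

Applying `instrument_bias` to two or more reference landmarks and comparing the resulting bias values provides the gross-error check described above.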

The preceding is sufficient for scaling images in terms of angular separations. Scaling in terms of linear distances requires range information. For example, the horizontal range of tornado debris is determined approximately by the intersection of the object's azimuth with the centerline of the tornado's damage track. Hence a careful survey of the tornado track is also required. The range of cloud features often can be obtained by triangulation (section 5) if there is simultaneous photography from different viewing directions.

## 3. A computer algorithm for ground-based photogrammetry

We now present a mathematical technique for retrieving the parameters describing image orientation and scaling from information gathered during the prephotogrammetry survey, and measurements made in the image. The image orientation (*α,* *ω,* *κ*; principal azimuth, elevation angle, roll angle) and scaling (*f*; focal length) are based on the azimuths and elevation angles (*ϕ,* *θ*) of two or more landmarks that are visible in the image (see Table 2 for a summary of variable definitions).

Let (*X**, *Y**, *Z**) be a Cartesian coordinate system in the object space, where the *X**, *Y**, and *Z** axes point eastward, northward, and upward, respectively, and let the front nodal point of the camera lens be at (*X*^{*}_{0}, *Y*^{*}_{0}, *Z*^{*}_{0}). Rotating the *X** and *Y** axes clockwise about the *Z** axis so that the new *Y* axis lies along the principal azimuth *α* results in a new coordinate system [*X,* *Y,* *Z*] (Fig. 1a, where square brackets are used to distinguish it from the primed system introduced below). In spherical coordinates (*R,* *ϕ,* *θ*), *R* is slant range from the camera, *ϕ* is the azimuth angle measured clockwise from true north, and *θ* is the elevation angle:

*X* = *R* cos*θ* sin(*ϕ* − *α*), *Y* = *R* cos*θ* cos(*ϕ* − *α*), *Z* = *R* sin*θ*. (1)

The relationships between [*X,* *Y,* *Z*] and (*R,* *ϕ,* *θ*) are shown in Fig. 1b. The geometry of a camera tilted upward at an elevation or pitch angle *ω* is shown in Fig. 1c. Rotation of the *Y* and *Z* axes about the *X* axis so that the new *Y* axis (now the *Y*′ axis) lies along the optical axis of the camera lens results in the primed system:

*X*′ = *X,* *Y*′ = *Y* cos*ω* + *Z* sin*ω,* (2)

*Z*′ = *Z* cos*ω* − *Y* sin*ω*. (3)

Let (*x*′, *y*′, *z*′) be a coordinate system in the image space behind the lens with origin at the rear nodal point of the lens and axes in the opposite directions to the (*X*′, *Y*′, *Z*′) system. The image of a distant object at (*R,* *ϕ,* *θ*) is in focus on the film at a distance *f* behind the rear nodal point, where *f* is the focal length of the lens. By similar triangles in Fig. 1d, the image is at

*x*′ = *fX*′/(*Y*′ − *f*), (4)

*z*′ = *fZ*′/(*Y*′ − *f*), (5)

so that an object on the optical axis images at *x*′ = 0, *z*′ = 0. In practice, the image is magnified for more accurate measurements. Henceforth we make the image-space coordinates apply to the enlarged image instead of the actual film by replacing *f* everywhere by *F* ≡ *Mf*, where *M* is the magnification. From (4) and (5), the image of an object that is effectively at (∞, *ϕ,* *θ*) appears at (*x*′, *z*′) given by

*x*′ = *F* cos*θ* sin(*ϕ* − *α*)/*D,* (6)

*z*′ = *F* [sin*θ* cos*ω* − cos*θ* cos(*ϕ* − *α*) sin*ω*]/*D,* where *D* ≡ cos*θ* cos(*ϕ* − *α*) cos*ω* + sin*θ* sin*ω*. (7)

For *θ* = 0, *z*′ = −*F* tan*ω,* which is the equation of the lens horizon. A contour of constant azimuth, say *ϕ*_{c}, on the photograph is given by the straight line

*z*′ = *F* cot*ω* − *x*′ csc*ω* cot(*ϕ*_{c} − *α*). (8)

Thus the constant-*ϕ* contours are straight lines that intersect at the image of the zenith point, which is at (*x*′, *z*′) = (0, *F* cot*ω*) (usually outside the picture). It can be shown that a contour of constant elevation angle, say *θ*_{c}, satisfies the equation of a hyperbola with center at (*x*′, *z*′) = {0, −0.5*F* [tan(*θ*_{c} + *ω*) − tan(*θ*_{c} − *ω*)]} and asymptotic slopes of ±tan*θ*_{c} sec*ω* (1 − tan^{2}*θ*_{c} tan^{2}*ω*)^{−0.5}. The upper (lower) branch of the hyperbola is the contour *θ* = *θ*_{c} (*θ* = −*θ*_{c}).
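The projection relations above are compact enough to verify numerically. The sketch below is our own illustration (the function name is invented; angles are in degrees and *F* is in arbitrary image units); it reproduces the lens-horizon and zenith-point properties quoted in the text.

```python
import math

def image_space_coords(phi, theta, alpha, omega, F):
    """Image-space coordinates (x', z') of a distant object at azimuth phi and
    elevation theta, for a camera at principal azimuth alpha tilted up through
    omega, with magnified focal length F. All angles in degrees."""
    dphi = math.radians(phi - alpha)
    th, om = math.radians(theta), math.radians(omega)
    # unit direction in the camera-aligned [X, Y, Z] axes
    X = math.cos(th) * math.sin(dphi)
    Y = math.cos(th) * math.cos(dphi)
    Z = math.sin(th)
    # tilt the axes up through omega so Y' lies along the optical axis
    Yp = Y * math.cos(om) + Z * math.sin(om)
    Zp = Z * math.cos(om) - Y * math.sin(om)
    return F * X / Yp, F * Zp / Yp
```

For example, any object at zero elevation maps to z′ = −F tan ω regardless of its azimuth (the lens horizon), and the zenith maps to (0, F cot ω).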

The photographic coordinate system is obtained by a rotation of the *x*′ and *z*′ axes about the *y*′ axis counterclockwise through an angle of roll or bank *κ,* and a translation so that the origin of the new coordinate system (*x*″, *z*″) is at the center of the print and its *x*″ and *z*″ axes are parallel to the horizontal and vertical sides of the photograph, respectively (Fig. 2). The new photographic coordinates are related to the previous ones by

*x*″ = *x*^{″}_{P} + *x*′ cos*κ* + *z*′ sin*κ,* *z*″ = *z*^{″}_{P} − *x*′ sin*κ* + *z*′ cos*κ,* (9)

where (*x*^{″}_{P}, *z*^{″}_{P}) is the position of the principal point on the print. Substituting (6) and (7) into (9) gives the photographic coordinates of the image of an object at (*ϕ,* *θ*):

*x*″ = *x*^{″}_{P} + *x*′(*ϕ,* *θ*) cos*κ* + *z*′(*ϕ,* *θ*) sin*κ,* *z*″ = *z*^{″}_{P} − *x*′(*ϕ,* *θ*) sin*κ* + *z*′(*ϕ,* *θ*) cos*κ*. (10)

Conversely, the angular position of an object, given the location of its image (*x*″, *z*″), is determined by inverting the roll and tilt rotations:

*x*′ = (*x*″ − *x*^{″}_{P}) cos*κ* − (*z*″ − *z*^{″}_{P}) sin*κ,* *z*′ = (*x*″ − *x*^{″}_{P}) sin*κ* + (*z*″ − *z*^{″}_{P}) cos*κ,*

*ϕ* = *α* + tan^{−1}[*x*′/(*F* cos*ω* − *z*′ sin*ω*)], *θ* = tan^{−1}{(*z*′ cos*ω* + *F* sin*ω*)/[*x*′^{2} + (*F* cos*ω* − *z*′ sin*ω*)^{2}]^{0.5}}. (11a)

For *κ* = 0, these formulas reduce to Eqs. (4) and (5) in H82. Although Holle's equations are correct for large angles when *κ* = 0, his method utilizes small-angle approximations in obtaining focal length, principal azimuth, and camera elevation angle. Saunders (1963) measured distances on the print from the principal plane and the lens horizon. His coordinates are given by

*x*_{S} = *x*′, *z*_{S} = *z*′ + *F* tan*ω,* (12)

that is, the perpendicular distances from the trace of the principal plane and from the lens horizon, respectively.

Now suppose that *F,* *α,* *ω,* and *κ* are unknown. Consider a grid, consisting of labeled contours of *ϕ* and *θ* at 1° intervals, overlaid on the photograph. The parameters *F,* *α,* *ω,* and *κ* control the mesh size, horizontal and vertical displacement of the grid, and orientation of the grid, respectively. Since lines of constant *ϕ* intersect in the image at the zenith point, which varies with cot*ω,* the tilt also affects the shape of the grid. Assume for now that the principal point is at the center of the photograph, as is generally the case. Then *F,* *α,* *ω,* and *κ* can be determined a posteriori by measuring the azimuth and elevation angles, (*ϕ*_{n}, *θ*_{n}), and the corresponding locations on the photograph, (*x*^{″}_{n}, *z*^{″}_{n}), of *N* landmarks in the photograph, where *N* ≥ 2. Since the solution for *F,* *α,* *ω,* and *κ* is obtained iteratively, a good initial approximate solution is desirable. This is obtained by selecting two landmarks (labeled *n* = 1 and *n* = 2) and using the small-angle approximation. From (10), *F,* *α,* *ω,* and *κ* are the roots of the nonlinear system of four equations

*x*^{″}_{n} = *x*^{″}_{P} + *x*′(*ϕ*_{n}, *θ*_{n}) cos*κ* + *z*′(*ϕ*_{n}, *θ*_{n}) sin*κ,* *z*^{″}_{n} = *z*^{″}_{P} − *x*′(*ϕ*_{n}, *θ*_{n}) sin*κ* + *z*′(*ϕ*_{n}, *θ*_{n}) cos*κ,* *n* = 1, 2. (13a)

Assuming that |*ϕ*_{n} − *α*|, |*θ*_{n}|, |*ω*|, |*κ*|, |(*x*^{″}_{n} − *x*^{″}_{P})/*F*|, and |(*z*^{″}_{n} − *z*^{″}_{P})/*F*| are all much less than one results in the second-order equations

(*x*^{″}_{n} − *x*^{″}_{P})/*F* ≈ (*ϕ*_{n} − *α*) + *κ*(*θ*_{n} − *ω*), (*z*^{″}_{n} − *z*^{″}_{P})/*F* ≈ (*θ*_{n} − *ω*) − *κ*(*ϕ*_{n} − *α*), (13b)

with the angles in radians. Differencing the *n* = 1 and *n* = 2 members of (13b) yields the closed-form first guesses

*F* = (Δ*ϕ* Δ*x*″ + Δ*θ* Δ*z*″)/[(Δ*ϕ*)^{2} + (Δ*θ*)^{2}], *κ* = (Δ*θ* Δ*x*″ − Δ*ϕ* Δ*z*″)/{*F* [(Δ*ϕ*)^{2} + (Δ*θ*)^{2}]},

where Δ*ϕ* ≡ (*ϕ*_{2} − *ϕ*_{1}), etc.; *α* and *ω* then follow from the *n* = 1 members of (13b). If either |(*x*^{″}_{n} − *x*^{″}_{P})/*F*| ≪ 1 or |(*z*^{″}_{n} − *z*^{″}_{P})/*F*| ≪ 1 is false, the solutions are good only to first order.

The general solution for arbitrary *N* is obtained by minimizing the misfit between the measured and predicted landmark positions. Here (*ϕ*_{n}, *θ*_{n}) is the measured angular position of the *n*th landmark, (*ϕ̂*_{n}, *θ̂*_{n}) is the location of the *n*th landmark predicted by (11a) as a function of *F,* *α,* *ω,* and *κ,* and (13b) is used as a first guess for *F,* *α,* *ω,* and *κ.* The squared angular distance between the actual and predicted position of the *n*th landmark is cos^{2}*θ*_{n}(*ϕ*_{n} − *ϕ̂*_{n})^{2} + (*θ*_{n} − *θ̂*_{n})^{2}. Therefore, we minimize

*E*^{2} = (1/*N*) Σ^{N}_{n=1} [cos^{2}*θ*_{n}(*ϕ*_{n} − *ϕ̂*_{n})^{2} + (*θ*_{n} − *θ̂*_{n})^{2}], (14)

where *E* is the root-mean-square error (rmse) in angular distance in the usual case when all the landmarks have low elevation angles (cos^{2}*θ*_{n} ≈ 1 for all *n*). The minimization is performed with the conjugate-gradient method (Press et al. 1986), which requires formulas for the derivatives ∂*E*^{2}/∂*F,* ∂*E*^{2}/∂*α,* ∂*E*^{2}/∂*ω,* and ∂*E*^{2}/∂*κ.* These are obtained by differentiating (14) and (11a).
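The retrieval can be sketched in a few dozen lines. The following is our own illustration, not the authors' IDL code: it implements (10) and (11a) with the principal point at the origin and fits (*F,* *α,* *ω,* *κ*) by Gauss-Newton least squares with a finite-difference Jacobian (a stand-in for the paper's conjugate-gradient minimization of *E*^{2}, using NumPy for the linear-algebra step).

```python
import math
import numpy as np

def image_coords(phi, theta, F, alpha, omega, kappa):
    """Photographic coordinates (x'', z'') of a distant object at (phi, theta),
    i.e. our rendering of (10) with the principal point at (0, 0); degrees."""
    dphi = math.radians(phi - alpha)
    th, om, k = (math.radians(a) for a in (theta, omega, kappa))
    D = math.cos(th) * math.cos(dphi) * math.cos(om) + math.sin(th) * math.sin(om)
    xp = F * math.cos(th) * math.sin(dphi) / D
    zp = F * (math.sin(th) * math.cos(om) - math.cos(th) * math.cos(dphi) * math.sin(om)) / D
    # roll the axes counterclockwise through kappa
    return xp * math.cos(k) + zp * math.sin(k), -xp * math.sin(k) + zp * math.cos(k)

def predict_angles(xdp, zdp, F, alpha, omega, kappa):
    """Azimuth and elevation (deg) of the feature imaged at (x'', z''),
    i.e. the inverse mapping (11a)."""
    k, om = math.radians(kappa), math.radians(omega)
    xp = xdp * math.cos(k) - zdp * math.sin(k)   # undo the roll
    zp = xdp * math.sin(k) + zdp * math.cos(k)
    X = xp                                       # untilt through omega
    Y = F * math.cos(om) - zp * math.sin(om)
    Z = zp * math.cos(om) + F * math.sin(om)
    phi = alpha + math.degrees(math.atan2(X, Y))
    theta = math.degrees(math.atan2(Z, math.hypot(X, Y)))
    return phi, theta

def retrieve_parameters(landmarks, p0, iters=30):
    """Fit (F, alpha, omega, kappa) to landmarks given as (x'', z'', phi, theta)
    tuples by Gauss-Newton on the angular residuals of (14)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        resid, J = [], []
        for xdp, zdp, phi, theta in landmarks:
            ph, th = predict_angles(xdp, zdp, *p)
            w = math.cos(math.radians(theta))
            resid.extend([w * (phi - ph), theta - th])
            rp, rt = [], []
            for j in range(4):                   # forward-difference Jacobian
                q = p.copy()
                q[j] += 1e-6 * max(1.0, abs(p[j]))
                ph2, th2 = predict_angles(xdp, zdp, *q)
                h = q[j] - p[j]
                rp.append(-w * (ph2 - ph) / h)
                rt.append(-(th2 - th) / h)
            J.append(rp)
            J.append(rt)
        step, *_ = np.linalg.lstsq(np.asarray(J), -np.asarray(resid), rcond=None)
        p += step
    return p
```

Gauss-Newton is chosen here only for brevity; any descent method with the same residuals, including the conjugate-gradient scheme the paper uses, converges to the same parameters for noiseless landmarks.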

What happens if the principal point is unknown, owing to the photograph being cropped or other causes? Do three or more landmarks provide enough information to determine the six unknowns *F,* *α,* *ω,* *κ,* *x*^{″}_{P}, and *z*^{″}_{P}? In the small-angle limit, fitting the grid coordinates (*x*″ and *z*″) across the photograph and simultaneously locating the principal point is virtually impossible for the following reason. It is clear from the appendix that (13a) for *n* = 1, 2, … , *N* is a linear system in only the four independent variables 1/*F,* *κ,* *α* + *ωκ* − *x*^{″}_{P}/*F,* and *ω* − *ακ* − *z*^{″}_{P}/*F,* regardless of the number of landmarks *N,* and hence the best we can do is to determine *F,* *α,* *ω,* and *κ* in terms of an assumed (*x*^{″}_{P}, *z*^{″}_{P}). Fortunately, the retrieved *F,* *α,* *ω,* and *κ* are highly insensitive to the location of the principal point, and we can set *x*^{″}_{P} = *z*^{″}_{P} = 0 because changes in *x*^{″}_{P}/*F* and *z*^{″}_{P}/*F* are compensated by equal changes in *α* + *ωκ* and *ω* − *ακ,* respectively.

Difficulties in determining the principal point in a wide-angle photograph may be anticipated since the grid is locally Cartesian in the neighborhood of the principal point. Our fears were confirmed by the results of simulated tests (section 4) in which the above method for finding the principal point generally converged to the wrong point. Therefore, in scaling wide-angle photographs, the “crossed-diagonals technique” (H82) should be used on the original negative or slide in order to obtain a good estimate of the location of the principal point in the image. For example, the principal point of the Canon 16 MS 16-mm movie camera used during VORTEX is within (±0.2 mm, ±0.3 mm) of the center of the 10.37 mm × 7.52 mm image frame according to specifications obtained from the manufacturer.

## 4. Tests of the algorithm

With telephoto images, or in situations in which only a feature in the central part of the image is of interest, the H82 method is found to be acceptable. In other words, a scaled Cartesian grid can be placed on the image and rotated so that its horizontal axis is parallel to the visible horizon, which is assumed to be at 0° elevation. This has been done in Fig. 3 with an image of the Dimmitt, Texas, tornado of 2 June 1995, recorded in Super-VHS video format during a VORTEX intercept. The overlay was created using drawing software and was scaled, translated, and rotated to obtain a subjective best fit to the four survey landmarks. This ∼10.5°-wide image is typical of the sort of telephoto image in which the small-angle approximation does not lead to significant errors (i.e., a Cartesian grid can be used for scaling).

This photograph (video image capture) also has been analyzed by the algorithm (Fig. 4). The resulting (slightly non-Cartesian) grid is generated in a matter of a few seconds at most and is similar to that obtained by the more labor-intensive subjective method (Fig. 3). The grid fits the landmark positions with an rmse of 0.045°. The small-angle solution based on the two outer landmarks has an rmse that is larger by 20% [or 40% if *κ* is set to zero in (13b)]. Although the H82 method may be accurate enough for telephoto images, the algorithm still has a considerable speed advantage. Once the computer code has been written (a one-time effort), the work in generating the grid consists merely of inputting the azimuths and elevation angles (*ϕ*_{n}, *θ*_{n}) into the program and running it. Moreover, the algorithm computes the magnified focal length *F* (2242 pixels, where the pixel dimensions of the image are 416 × 238), principal azimuth *α* (262.67°), camera elevation angle *ω* (1.33°), and roll angle *κ* (0.74°). These parameters are then available for immediately computing via (11a) the azimuth and elevation angles (*ϕ,* *θ*) of any object, given the Cartesian coordinates (*x*″, *z*″) of its image.

A more challenging test is provided by the following simulation of a 35-mm photograph (image dimensions 36 mm × 24 mm) taken with a 28-mm wide-angle lens. It is assumed that the camera is pointed at 270° azimuth, is tilted upward at 15°, and has a roll of 7.5°, and that there are four landmarks at angular positions (*ϕ*_{1}, *θ*_{1}) = (240°, 0.5°), (*ϕ*_{2}, *θ*_{2}) = (290°, 1.0°), (*ϕ*_{3}, *θ*_{3}) = (260°, 2.0°), and (*ϕ*_{4}, *θ*_{4}) = (275°, 0.7°). The principal point is assumed to be at the center of the slide. The corresponding positions (*x*^{″}_{n}, *z*^{″}_{n}), *n* = 1, … , 4, of the images of the landmarks on the slide were computed from (10) and then rounded to the nearest 0.1 mm to allow for observational imprecision. Given the above angular positions and coarsened image locations of the landmarks, the algorithm was used to retrieve the camera parameters *F,* *α,* *ω,* and *κ,* and to compute the rmse of the predicted angular positions of the landmarks compared to the actual positions. The small-angle solution based on the two outer landmarks gives (*F,* *α,* *ω,* *κ*) = (30.9 mm, 270.7°, 13.8°, 7.6°) and has an rmse of 1.7°. Using the two inner landmarks instead of the two outer ones provides the more accurate result: (*F,* *α,* *ω,* *κ*) = (29.0 mm, 269.9°, 14.6°, 7.9°) with an rmse of 0.54°. Ignoring the *κ* terms in this case degrades the rmse to 2.4°. The corresponding values for the conjugate-gradient solution, (*F,* *α,* *ω,* *κ*) = (28.03 mm, 270.01°, 15.00°, 7.50°) and rmse = 0.07°, illustrate the superiority of the conjugate-gradient solution.

We did not succeed in finding a reliable method for locating the principal point (PP) in wide-angle photographs. Our attempts consisted of running the algorithm with different assumed positions of the PP to determine the rmse as a function of *x*^{″}_{P} and *z*^{″}_{P}. Instead of a single minimum at the true PP, the rmse field contained multiple comparable minima in the (*x*^{″}_{P}, *z*^{″}_{P}) plane. Even with *κ* = 0, the variation in *z*^{″}_{P}/*F* between minima seemed to be compensated by a nearly equal change in *ω.* This indicated that the inability to locate the PP might be associated with the low elevation angles of all the landmarks. To test this hypothesis, the experiment was repeated with *θ*_{4} changed from 0.7° to 20°. In this case, there was a global minimum near the PP at (0, 0.5) mm with an rmse of 0.068°. There was also a comparable minimum at (0.5, 2.5) mm with an rmse of 0.086°, which introduces uncertainty into the PP determination. When *θ*_{3} also was changed, from 2° to 15°, there was a single minimum at the PP with an rmse of 0.062°. Recall that accurate surveying of high-elevation-angle landmarks is very sensitive to the precision with which the camera position is determined. Since the likelihood of having two landmarks with precise high-elevation angles is small, determination of the PP generally is impossible.

## 5. Application example using triangulation of VORTEX data

One of the uses of photogrammetry in VORTEX data analysis has been to locate tornadoes, wall clouds, and other accessory clouds by triangulation. In many events, numerous photographers, working with VORTEX and independently, obtained still and video images from different viewing directions. The video images were registered in time to the nearest ∼5 s (time registration techniques are beyond the scope of this paper). Time registration of still photos is more problematic, but in the case of tornado cyclones, the cloud features tend to rotate and evolve quickly enough that comparisons with time-stamped video provide time estimates that have an uncertainty of about 15 s in practice. Using two or more simultaneous images from different angles (stereo photogrammetry), the locations of common features in the photos can be triangulated. (In field research programs, whenever practical, camera teams should synchronize their photographs via radio communication.)

The technique of graphical intersection simply involves plotting, on an isometric map projection, straight lines from the camera locations oriented along the azimuths of the feature. The intersection of these lines gives the horizontal coordinates of the common feature. The height of the feature can be computed directly from (16) below using the range and elevation angle of the feature from one of the cameras and the camera's height above mean sea level. The graphical intersection technique is illustrated in Fig. 5 for the Dimmitt, Texas, tornado photographed at ∼0106:30 UTC 2 June 1995. (In this illustration, the two images were not obtained simultaneously, so there is a small error in tornado position compared to accurate trackings that have been made using stereo photogrammetry and mobile Doppler data for a formal study in progress.) The center of the visible tornado near the ground, and the left and right extent of the completely opaque portion of the debris cloud were mapped. Examination of the scaled images as well as the output from the objective scaling and orientation technique indicates that uncertainty in azimuth from orientation and scaling errors is ∼0.1°, while uncertainty owing to the “nebulous” appearance and definition of the features is probably closer to 0.2°. It is typically the case that more uncertainty accrues from the identification of a cloud feature than from image measurement, scaling, and orientation errors. At the ranges of 6–10 km, an uncertainty in azimuth of 0.3° equates to an uncertainty in position of about 30–50 m.
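The intersection step reduces to elementary vector algebra. The sketch below is our own illustration (function name and local east/north map coordinates, in meters, are our choices):

```python
import math

def triangulate(cam_p, cam_c, az_p, az_c):
    """Horizontal position of a feature from two camera locations (east, north)
    in meters and the feature's azimuth (deg from true north) from each."""
    # unit vectors along each sight line
    up = (math.sin(math.radians(az_p)), math.cos(math.radians(az_p)))
    uc = (math.sin(math.radians(az_c)), math.cos(math.radians(az_c)))
    dx, dy = cam_c[0] - cam_p[0], cam_c[1] - cam_p[1]
    # sine of the intersection angle; near zero for near-collinear sight lines
    denom = up[0] * uc[1] - up[1] * uc[0]
    # horizontal range from P along its sight line
    r_p = (dx * uc[1] - dy * uc[0]) / denom
    return cam_p[0] + r_p * up[0], cam_p[1] + r_p * up[1]
```

Note that `denom` shrinks as the sight lines approach collinearity, so small azimuth errors are then greatly amplified, the same geometric caveat raised for the image pair discussed later in this section.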

Suppose that a target feature T is at angular position (*ϕ*_{PT}, *θ*_{PT}) in the image of camera P. We wish first to find the angular position (*ϕ*_{CT}, *θ*_{CT}) of the same feature in a simultaneous image taken by another camera C at an azimuth *β* and a horizontal distance *S* from P, and then find the location in space of T. By triangulation and the sine rule (Fig. 6), the horizontal ranges from P and C to T are

*R*_{PT} = *S* sin(*ϕ*_{CT} − *β*)/sin(*ϕ*_{CT} − *ϕ*_{PT}), *R*_{CT} = *S* sin(*ϕ*_{PT} − *β*)/sin(*ϕ*_{CT} − *ϕ*_{PT}). (15)

The height of T then follows from *Z** = *Z*^{*}_{C} + *R*_{CT} sin*θ*_{CT} = *Z*^{*}_{P} + *R*_{PT} sin*θ*_{PT}. If there are several features in C's image that might correspond to the target in P's image, then the automated technique is used as follows. The ray commencing at camera P and passing through the target is defined parametrically by

*X** = *X*^{*}_{P} + *R*_{PT} cos*θ*_{PT} sin*ϕ*_{PT}, *Y** = *Y*^{*}_{P} + *R*_{PT} cos*θ*_{PT} cos*ϕ*_{PT}, *Z** = *Z*^{*}_{P} + *R*_{PT} sin*θ*_{PT}. (16)

The angular position of a point on this ray, as seen from camera C, is

*ϕ* = tan^{−1}[(*X** − *X*^{*}_{C})/(*Y** − *Y*^{*}_{C})], (17)

*θ* = tan^{−1}{(*Z** − *Z*^{*}_{C})/[(*X** − *X*^{*}_{C})^{2} + (*Y** − *Y*^{*}_{C})^{2}]^{0.5}}. (18)

Substituting (17) and (18) into (10), with camera C's orientation parameters, gives the image coordinates (*x*^{″}_{C}, *z*^{″}_{C}) as functions of *R*_{PT} and allows us to project the ray from P onto C's image, with points along the ray labeled by values of *R*_{PT}. Restricting the search to a narrow strip about the projected ray (allowing for experimental error) is usually sufficient to identify T in C's image and hence to deduce *R*_{PT}. Then the angular position (*ϕ*_{CT}, *θ*_{CT}) of T in the image of C can be obtained from (11a) and the now-known (*x*^{″}_{C}, *z*^{″}_{C}), or *R*_{CT} can be determined from (15).
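The ray-projection search can be sketched as follows. This is our own illustration: `image_coords` reimplements (10) with the principal point at the origin, camera positions are in local (east, north, up) meters, and the function names are invented.

```python
import math

def image_coords(phi, theta, F, alpha, omega, kappa):
    """Photographic coords (x'', z'') of a feature at azimuth phi, elevation
    theta (deg); (10) with the principal point at the origin."""
    dphi = math.radians(phi - alpha)
    th, om, k = (math.radians(a) for a in (theta, omega, kappa))
    # assumes the feature is in front of the lens (D > 0)
    D = math.cos(th) * math.cos(dphi) * math.cos(om) + math.sin(th) * math.sin(om)
    xp = F * math.cos(th) * math.sin(dphi) / D
    zp = F * (math.sin(th) * math.cos(om) - math.cos(th) * math.cos(dphi) * math.sin(om)) / D
    return xp * math.cos(k) + zp * math.sin(k), -xp * math.sin(k) + zp * math.cos(k)

def project_ray_into_C(cam_p, cam_c, params_c, phi_pt, theta_pt, ranges):
    """For each trial slant range R_PT along P's sight line toward the target,
    return (R_PT, x''_C, z''_C) in camera C's image; params_c is C's
    (F, alpha, omega, kappa)."""
    cph, sph = math.cos(math.radians(phi_pt)), math.sin(math.radians(phi_pt))
    cth, sth = math.cos(math.radians(theta_pt)), math.sin(math.radians(theta_pt))
    out = []
    for R in ranges:
        # ray point, earth coordinates relative to camera C
        X = cam_p[0] + R * cth * sph - cam_c[0]
        Y = cam_p[1] + R * cth * cph - cam_c[1]
        Z = cam_p[2] + R * sth - cam_c[2]
        # angles of the ray point as seen from C, then image coords
        phi = math.degrees(math.atan2(X, Y))
        theta = math.degrees(math.atan2(Z, math.hypot(X, Y)))
        out.append((R, *image_coords(phi, theta, *params_c)))
    return out
```

Plotting the returned points on C's image, labeled by their *R*_{PT} values, reproduces the labeled-ray overlay described in the example below.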

This procedure is illustrated by another example from VORTEX. Figure 6 depicts the camera positions and geometry for this example. A target cloud feature in a photograph from camera P is chosen (i.e., the diamond near 267°, 2° in Fig. 7a). The coordinates of points along the ray are computed from (16), and the ray is then projected into the image of a second camera, C, using (17) and (18). The projected points in C's image are labeled with the pertinent values of *R*_{CT} (see Fig. 7b). The analyst then chooses the location along the ray of points that coincides with the same cloud feature of the target image. In this case, the lowered portion of the distant cloud base to the right of the tornado at roughly 15 750-m range is chosen as the solution. This technique has the attractive feature of allowing estimates of uncertainty based on how close the solution ray passes to target features. In the example, it appears that the uncertainty is perhaps around 0.5°. Hence, this cloud feature is near azimuth 250.7°, elevation 0.8°, and range 15 750 m from camera C. A correction term for earth curvature and atmospheric refraction, +6.76 × 10^{−8} *R*^{2} cos^{2}*θ* (H82), is used in the height computations. This correction is +3 m at P and +16 m at C. The target height is roughly 1400 m MSL from Fig. 6. [The close agreement between the calculations using P's and C's data is probably fortuitous.]
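The height computation with H82's curvature and refraction term can be written out directly (our own function; `z_cam` is the camera height in meters MSL, `slant_range` in meters). With R = 15 750 m and θ = 0.8°, the correction term evaluates to about +16 m, matching the value quoted above.

```python
import math

def feature_height_msl(z_cam, slant_range, elev_deg):
    """Height (m MSL) of a feature at slant range R and elevation angle theta,
    including H82's earth-curvature/refraction term 6.76e-8 R^2 cos^2(theta)."""
    th = math.radians(elev_deg)
    return (z_cam + slant_range * math.sin(th)
            + 6.76e-8 * slant_range**2 * math.cos(th)**2)
```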

This pair of images was chosen because of their image clarity in publication. In reality, they were not obtained at the same instant (as evidenced by the differing morphology of the tornado funnel), although the trailing low cloud feature was evolving slowly enough that this measurement might not be too much in error. Another pathology of this image pair is that the two cameras were quite close to collinear in orientation near the time these images were obtained, which can lead to serious errors in practice: stereo photogrammetry, like dual-Doppler analysis, relies on adequate separation in the angle of view.

It is not strictly necessary to identify distinctive targets. For example, it is often sufficient to choose a more amorphous target, such as the edge of a cloud base. The analyst simply chooses the point in the solution image where the ray passes through the cloud base (being cognizant of perspective), irrespective of whether a distinctive feature can be noted on the cloud base. This approach is the one utilized in the foregoing discussion of the actual photographs.

## 6. Conclusions

After deducing the camera site accurately by lining up foreground and background landmarks in the imagery with those seen in person and then carefully measuring the azimuth and elevation angles of the landmarks, it is possible to perform accurate photogrammetry on images with unrecorded focal length and camera orientation angles. We have developed an algorithm (section 3) that rapidly deduces the focal length, azimuth, and tilt of the optical axis, and the roll angle of the camera, and then computes the azimuth and elevation angles of any feature in the image. Assumptions that the roll is negligible, that the visible horizon is the lens horizon, and that angles are small are unnecessary and potential sources of error. Although the small-angle solution derived in section 3 and the appendix might be sufficiently accurate for telephoto images, there is no advantage to using it over the more general and accurate algorithm if the time to develop the computer code has been invested previously. The method accommodates the scale distortion inherent in wide-angle photographs (e.g., the distortion visible in the wide-angle photograph of Fig. 5). Optical effects, such as pincushion and barrel distortion, are not accommodated here, but are treatable using techniques readily found in Web searches. Results are insensitive to the exact position of the principal point for telephoto images. For wide-angle photography, the principal point can be determined only if there is a sufficient number of precisely measured landmarks with diverse azimuth and elevation angles. If all the landmarks have low elevation angles, the PP is impossible to determine and must be assumed to lie at the intersection of the diagonals of the uncropped image.

A photogrammetric search technique is described for finding an entity that is visible in one camera's imagery in a simultaneous image obtained from a different direction by a second camera. Once the same object has been identified in both images, its 3D position is determined by triangulation. This method could be used to superpose visible cloud features onto fields of reflectivity and Doppler velocity observed by mobile Doppler radar.
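The triangulation step can be sketched as finding the midpoint of the shortest segment between the two camera rays; because the measured rays rarely intersect exactly, this midpoint is a natural estimate of the target position. The sketch below is our construction (local Cartesian frame with *x* east, *y* north, *z* up; azimuth clockwise from north), not the paper's IDL routines.

```python
import numpy as np

def ray_direction(az_deg, el_deg):
    """Unit vector for azimuth (deg, clockwise from north) and elevation (deg)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),   # east
                     np.cos(el) * np.cos(az),   # north
                     np.sin(el)])               # up

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t1*d1 and p2 + t2*d2."""
    r = p2 - p1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    e, f = r @ d1, r @ d2
    denom = a * c - b * b            # zero only if the rays are parallel
    t1 = (c * e - b * f) / denom     # parameter of closest approach on ray 1
    t2 = (b * e - a * f) / denom     # parameter of closest approach on ray 2
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
```

For example, a target seen due east (azimuth 90°) from one site and due north (azimuth 0°) from a second site 1 km east and 1 km south of the first triangulates to a point 1 km east of the first site.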

## Acknowledgments

This work was supported under Grants ATM-9617318 and ATM-0003869 from the National Science Foundation. Additional support was provided by the National Severe Storms Laboratory. We gratefully acknowledge the very thorough and helpful reviews by the anonymous reviewers. Computer routines, written in the IDL programming language, that perform some of the photogrammetric functions described in this paper can be obtained from the corresponding author.

## REFERENCES

Bluestein, H. B., and J. H. Golden, 1993: A review of tornado observations. *The Tornado: Its Structure, Dynamics, Prediction, and Hazards, Geophys. Monogr.,* No. 79, Amer. Geophys. Union, 319–352.

Golden, J. H., and D. Purcell, 1978: Airflow characteristics around the Union City tornado. *Mon. Wea. Rev.,* **106,** 22–28.

Hoecker, W. H., 1960: Wind speed and airflow patterns in the Dallas tornado of April 2, 1957. *Mon. Wea. Rev.,* **88,** 167–180.

Holle, R. L., 1982: Photogrammetry of thunderstorms. *Thunderstorms: A Social and Technological Documentary,* E. Kessler, Ed., University of Oklahoma Press, 77–98.

Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, 1986: *Numerical Recipes: The Art of Scientific Computing.* Cambridge University Press, 818 pp.

Rasmussen, E. N., J. M. Straka, R. Davies-Jones, C. A. Doswell III, F. H. Carr, M. D. Eilts, and D. R. MacGorman, 1994: Verification of the Origins of Rotation in Tornadoes Experiment: VORTEX. *Bull. Amer. Meteor. Soc.,* **75,** 995–1006.

Saunders, P. M., 1963: Simple sky photogrammetry. *Weather,* **18,** 8–11.

Slama, C. C., Ed., 1980: *Manual of Photogrammetry.* 4th ed. American Society of Photogrammetry, 1056 pp.

Wakimoto, R. M., and V. M. Bringi, 1988: Dual-polarization observations of microbursts associated with intense convection: The 20 July storm during the MIST project. *Mon. Wea. Rev.,* **116,** 1521–1539.

Wakimoto, R. M., and B. E. Martner, 1992: Observations of a Colorado tornado. Part II: Combined photogrammetric and Doppler radar analysis. *Mon. Wea. Rev.,* **120,** 522–543.

Wakimoto, R. M., C. J. Kessinger, and D. E. Kingsmill, 1994: Kinematic, thermodynamic, and visual structure of low-reflectivity microbursts. *Mon. Wea. Rev.,* **122,** 72–92.

## APPENDIX

### Small-Angle Solution for a Known or Assumed Principal Point

Equations (A1)–(A4) form a linear system in the four unknowns *F*^{−1}, *κ,* *ωκ* + *α,* and *ω* − *ακ* (or *F*^{−1}, *κ,* *ωκ* + *α* − *x*″_{P}*F*^{−1}, and *ω* − *ακ* − *z*″_{P}*F*^{−1} if *x*″_{P} and *z*″_{P} are nonzero). Let Δ( ) denote the landmark difference ( )_{1} − ( )_{2}. Subtracting (A2) from (A1) and (A4) from (A3) results in a linear system in two of the unknowns, namely *F*^{−1} and *κ,* which can be solved provided that Δ*ϕ*Δ*x*″ + Δ*θ*Δ*z*″ ≠ 0. With *F*^{−1} and *κ* now known, we rewrite (A3) and (A1) as a linear system in the unknowns *ω* and *α.*
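The two-landmark small-angle solution can be illustrated in code. The relations used below, *ϕ* ≈ *F*^{−1}(*x*″ + *κz*″) + (*α* + *ωκ*) and *θ* ≈ *F*^{−1}(*z*″ − *κx*″) + (*ω* − *ακ*), are our reconstruction from the unknown combinations listed above (the equations themselves are not reproduced here); note that the denominator of the differenced system is exactly the quantity Δ*ϕ*Δ*x*″ + Δ*θ*Δ*z*″ required to be nonzero.

```python
import numpy as np

def small_angle_solve(phi, theta, x, z):
    """Small-angle camera solution from two landmarks (a reconstruction,
    assuming phi ~ F^-1 (x + k z) + (a + w k) and
             theta ~ F^-1 (z - k x) + (w - a k)).

    phi, theta: landmark azimuth and elevation (radians);
    x, z: image coordinates relative to the principal point."""
    dphi, dth = phi[0] - phi[1], theta[0] - theta[1]
    dx, dz = x[0] - x[1], z[0] - z[1]
    denom = dphi * dx + dth * dz              # must be nonzero
    kappa = (dphi * dz - dth * dx) / denom    # roll angle
    F_inv = denom / (dx**2 + dz**2)           # inverse focal length
    # back-substitute landmark 1 for the two remaining combinations
    c1 = phi[0] - F_inv * (x[0] + kappa * z[0])    # = alpha + omega*kappa
    c2 = theta[0] - F_inv * (z[0] - kappa * x[0])  # = omega - alpha*kappa
    # unscramble alpha and omega from the combinations
    alpha = (c1 - kappa * c2) / (1 + kappa**2)
    omega = (c2 + kappa * c1) / (1 + kappa**2)
    return F_inv, kappa, alpha, omega
```

Applied to synthetic landmarks generated with the same small-angle relations, the routine recovers the four camera parameters exactly.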

Fig. 2. Schematic of a photograph taken with a camera at a roll angle *κ.* The intersection C of the diagonals (dashed) approximately locates the principal point P (the separation between C and P is exaggerated for clarity). The *x*′ and *z*′ axes are horizontal and vertical; the *x*″ and *z*″ axes are parallel to the long and short edges of the photograph.

Citation: Journal of Atmospheric and Oceanic Technology 20, 12; 10.1175/1520-0426(2003)020<1790:TPOWIA>2.0.CO;2


Fig. 3. Image of the Dimmitt, TX, tornado of 2 Jun 1995 obtained by the “CAM-1” VORTEX intercept team using Super-VHS video. The four survey landmarks are marked with arrowheads near the survey point, with azimuth and elevation annotated in decimal degrees. From left to right, the landmarks are a faint power pole, the left roof peak of the barn, a power pole to the right of the barn, and the top of the second power pole from the image edge. The Cartesian grid overlay was generated in graphics drawing software and magnified, translated, and rotated to find a subjective best fit with the survey data.


Fig. 4. As in Fig. 3, but the principal vertical and horizontal lines are solid white with a gap at the principal point (at image center), and the overlay is the objectively computed image scaling and orientation. Lines of constant azimuth and elevation are dash-dotted at 5° intervals and dotted at 1° intervals. This and subsequent images are the output of an IDL image analysis program.


Fig. 5. Example of the graphical intersection technique. The top image is from a 35-mm still photograph obtained by the “PROBE-4” VORTEX team located at “P4” on Texas Highway 194 southeast of Dimmitt. The bottom image is a digitized frame of Super-VHS video from the CAM-1 team located on Texas Highway 86 east of Dimmitt. The images were objectively scaled and oriented using the method described in the text. The map shows lines plotted from the camera sites along the measured azimuths of the center of the tornado near the ground, as well as the left and right extent of the opaque portion of the debris cloud. The circle represents the ∼350-m-diameter location of this opaque debris cloud.


Fig. 6. The geometrical relationship, in two vertical planes and in projection onto a horizontal plane, between a target T and two cameras C and P. The numerical values represent the solution of the problem depicted in Fig. 7.


Fig. 7. Example of automated graphical intersection for a full 3D target-location solution. In the top image (obtained by I. Wittmeyer of the VORTEX PROBE-4 team), a target has been identified on the lower edge of the cloud base trailing the tornado (black box containing a white diamond). In the lower image, obtained by the VORTEX CAM-1 team, the ray from the PROBE-4 camera through the target is traced. Symbols are marked with range from the CAM-1 camera in tens of meters. The likely location of the target along the ray is between the 14 990- and 15 940-m range marks.


Table 1. Definitions of common photogrammetry terms.

Table 2. Definitions of symbols used in the text.

^{1} Photogrammetry in general is much more often concerned with measurements in airborne photographs obtained at near-vertical incidence. Terrestrial photogrammetry is concerned with measurements in photographs obtained from the ground at incidence close to horizontal.