
Speeding Up a Large-Scale Filter

V. Lakshmanan

National Severe Storms Laboratory, and University of Oklahoma, Norman, Oklahoma

Abstract

Wolfson et al. introduced a storm tracking algorithm, called the Growth and Decay Storm Tracker, in which the large-scale features were extracted from radar data fields by using an elliptical filter. The elliptical filter as introduced was computationally too expensive to be performed in real time.

In this paper, it is shown that although the elliptical filter is nonlinear, it can be decomposed into two parts, one of which is, under some simplifying assumptions, linear and shift-invariant. The linear component can be accelerated using fast algorithms available to compute the digital Fourier transform (DFT). Furthermore, it is shown that the nonlinear part can be written as an update equation, thus reducing the amount of computer memory required. With these improvements to the basic large-scale filtering technique, this paper reports that the large-scale filtering can be done 1–2 orders of magnitude faster.

The improvement makes it possible to use the large-scale filtering technique in situations where the computational time and memory requirements have been prohibitive.

Corresponding author address: V. Lakshmanan, U.S. Department of Commerce, National Severe Storms Laboratory, Stormscale Research and Applications Division, 1313 Halley Circle, Norman, OK 73069.

Email: lakshman@nssl.noaa.gov


1. Introduction

It has been shown (Browning et al. 1982) that large-scale radar signatures (features such as mesoscale precipitation areas) are more predictable than smaller-scale ones (features such as individual convective rain echoes). Therefore, extracting the larger-scale features from radar images of storms has been studied extensively. One way of extracting features of arbitrary scales from images is to convolve the images with appropriate filters. Authors (e.g., Browning 1979; Bellon and Zawadzki 1994) have traditionally used filters whose region of support is isotropic. However, storms are often organized such that they are several times longer than they are wide. Hence, a filter that accounts for this elongation, with a region of support elongated along the front direction, would be expected to perform better at extracting large-scale signatures.

Wolfson et al. (1999) used a filter where the region of support was an ellipse with the major axis of the ellipse about four times longer than the minor axis.1 Since the direction of the front was not known a priori, several filters with the ellipse at different orientations were used and the filter that yielded the maximum response at a particular location was assumed to be the one aligned with the front direction at that location.

Weather radars commonly used in the United States provide a resolution of about 1 km per pixel2 and a range of more than 250 km at the lowest elevations. A weather radar makes a new volume scan every 300 s on average and a new elevation scan every 30 s on average. Thus, filtering commonly needs to be done for volume products in under 300 s and for elevation products in under 30 s. For any filtering technique to be used effectively in a near-real-time environment, it will have to meet these time criteria.

The filtering of a time sequence by a filter window can be achieved by multiplying the Fourier transform of the time sequence with the Fourier transform of the filter. Since there are fast algorithms available to compute the digital Fourier transform (DFT) of sequences whose lengths are composites of small prime numbers, significant speedups can be realized by using this approach.

2. Methods

a. Large-scale filtering

In their paper on the Growth and Decay Storm Tracker, Wolfson et al. (1999) described the large-scale filtering technique in some detail. The filtering process, which is applied to any gridded weather radar data field, is as follows (a code sketch of these steps follows the list).

  1. The center point of the two-dimensional filter (which is an ellipse rotated in 10° increments; see Fig. 1) is placed over each point in the data field to be filtered. The average of all the image points that lie under cross-hatched pixels in the filter (see Fig. 1) is computed.
  2. The filter is then rotated by 10° (through 180°) and the average value with this filter orientation is recomputed.
  3. The maximum of the average values obtained at each of the filter orientations is set as the filtered value at that data point.
  4. Steps 1–3 are repeated for every pixel in the image.
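The following is a minimal, illustrative sketch of the brute-force procedure above, written in Python with numpy. The array names (`data` for the gridded field, `masks` for the list of binary elliptical-filter orientations) and the handling of points that fall outside the grid are assumptions made for the example, not the authors' implementation.

```python
import numpy as np

def brute_force_filter(data, masks):
    """Steps 1-4 above: at every pixel, average the field under each
    elliptical-filter orientation and keep the maximum average."""
    n_rows, n_cols = data.shape
    result = np.full(data.shape, -np.inf)
    for mask in masks:                          # step 2: every orientation
        half = mask.shape[0] // 2               # masks are (2p+1) x (2p+1)
        for i in range(n_rows):                 # step 4: every pixel
            for j in range(n_cols):
                total, count = 0.0, 0
                for m in range(-half, half + 1):
                    for n in range(-half, half + 1):
                        ii, jj = i + m, j + n
                        if (mask[m + half, n + half]
                                and 0 <= ii < n_rows and 0 <= jj < n_cols):
                            total += data[ii, jj]   # step 1: points under the
                            count += 1              # cross-hatched pixels
                if count:
                    result[i, j] = max(result[i, j], total / count)  # step 3
    return result
```

Written this way, the cost grows with both the image size and the filter size, which is what motivates the rearrangement described in the rest of this paper.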

This paper presents an equivalent implementation of the above process and shows how that implementation can be made faster.

b. Modified technique

The algorithm described in Wolfson et al. (1999) can be performed equivalently as follows.

  1. Compute the weighted average at each pixel of the original image where the weights are nonzero only where the elliptical filters (see Fig. 1) have cross-hatched pixels.
  2. Repeat step 1 for every orientation of the elliptical filters.
  3. Set each pixel of the final image to be the maximum value at the same pixel location (same row and same column) of the filtered images at each orientation.

The main difference between Wolfson et al. (1999) and the operations listed above is the order in which the operations are performed. The maximum is computed after the entire image has been filtered by the elliptical filter rather than after each pixel of the output has been computed. Sections 2c and 2d show that the algorithm described above is equivalent to the one described in section 2a. Under some simplifying conditions, it is shown that step 1 can be sped up using fast Fourier transforms (FFTs).

c. The original technique

It will be useful to represent the process described in Wolfson et al. (1999) more formally. The original gridded data field is represented as $D$ and the filtered grid as $F$. We will also use the notation that $D_{i,j}$ is the grid value at the $i$th row and $j$th column. Then, the elliptical filtering process can be represented as

$F_{i,j} = \mathrm{fil}(D, E_0)_{i,j} \vee \mathrm{fil}(D, E_1)_{i,j} \vee \cdots \vee \mathrm{fil}(D, E_{q-1})_{i,j},$   (1)

where $E_0, E_1, \ldots, E_{q-1}$ are the $q$ elliptical filter orientations and $\mathrm{fil}(D, E)$ is the average value of the points in $D$ underneath the filter $E$. The symbol $\vee$ denotes the maximum operator. It is readily apparent that the filtering technique in Eq. (1) is nonlinear.3

Let us make each orientation of the elliptical filter a gridded array with 1s at the cross-hatched pixels in Fig. 1 and 0s everywhere else. We will then get a $(2p + 1) \times (2p + 1)$ grid, where $2p + 1$ is the major axis dimension of the ellipse (i.e., the grid will be 64 × 64 if the ellipses have a major axis of 64 pixels and a minor axis of 15 pixels). Let us also denote by $N_k$ the number of points that are 1 in the $k$th orientation, $E_k$, of the filter.4 Then, the averaging operation, fil, can be written as

$\mathrm{fil}(D, E_k)_{i,j} = \frac{1}{N_k} \sum_{m=-p}^{p} \sum_{n=-p}^{p} E_{k,m,n}\, D_{i+m,j+n}.$   (2)

Recognizing that summation is a linear operation and that $E_{k,m,n}$ is a constant, we see that the combination of multiply and add in Eq. (2) is a linear operation on $D$.

Let us assume that we use grids of size $N \times N$ and elliptical filters of size $(2p + 1) \times (2p + 1)$. Then, the number of operations required to compute each filter point value is $(2p + 1)^2$. This has to be done for each orientation of the filter. Since there are $q$ filter orientations, one has to perform $q(2p + 1)^2$ operations for each $F_{i,j}$, which in turn has to be computed for each pixel in the image. Thus, the total computational overhead for the algorithm as described in Wolfson et al. (1999) is of the order of $q(2pN)^2$.

For a 512 × 512 image and a 15 × 64 elliptical filter in 10° increments, we have N = 512, p = 32, and q = 18. The computational overhead is on the order of 19 billion operations.

d. Rearranging the operations

Rewriting Eq. (1) with the expansion of the filtered form and explicitly noting the iteration over average values for the max operation, we obtain

$F_{i,j} = \bigvee_{k=0}^{q-1} \left\{ \frac{1}{N_k} \sum_{m=-p}^{p} \sum_{n=-p}^{p} E_{k,m,n}\, D_{i+m,j+n} \right\}.$   (3)

What we want is the filtered grid value at every point in $F$. So, $F$ is the set of all the filtered grid values:

$F \equiv \{F_{i,j}\}.$   (4)

Let us now denote by $F^k_{i,j}$ the result of the linear operation,

$F^k_{i,j} = \frac{1}{N_k} \sum_{m=-p}^{p} \sum_{n=-p}^{p} E_{k,m,n}\, D_{i+m,j+n}.$   (5)

Recognizing that the $F^k_{i,j}$'s do not depend on each other, the max operation can be moved outside the braces; that is, the maximum will be done after we have obtained all the pixels of the filtered image $F^k$. Then, the entire image $F$ can be written as

$F = \bigvee_{k=0}^{q-1} F^k,$   (6)

where $F^k$ is the image formed from the grid values $F^k_{i,j}$.

e. Speedup

Readers may recognize Eq. (5) (or step 1 of the modified algorithm in section 2b) as the two-dimensional, digital counterpart of the convolution operation. A well-known property of the convolution of two functions, f(t) and g(t), is that it is the inverse Fourier transform of the product of the Fourier transforms of the two signals:
$(f * g)(t) = \mathcal{F}^{-1}\{\mathcal{F}[f(t)]\, \mathcal{F}[g(t)]\},$   (7)

where $\mathcal{F}[f(t)]$ is the Fourier transform of $f(t)$. [For an explanation of this theorem, consult any introductory engineering mathematics text, such as Kreyszig (1983).] Denoting the digital Fourier transform of an image $X$ by $\hat{X}$, we can rewrite Eq. (5) as

$\hat{F}^k_{r,s} = \hat{E}_{k,r,s}\, \hat{D}_{r,s},$   (8)

where the new indices represent the idea that we are operating on discrete spatial frequencies. Comparing this with Eq. (6), we see that we can obtain the filtered image $F$ as follows:

$F_{i,j} = \bigvee_{k=0}^{q-1} \left[ \mathcal{F}^{-1}\{\hat{E}_k\, \hat{D}\} \right]_{i,j}.$   (9)

Note that the indices are back to $i, j$ since the inverse transform puts us back into spatial coordinates. The two Fourier transforms ($\hat{E}_k$ and $\hat{D}$) must be the same size if we are to do an element-by-element multiplication. So, one will have to pad the elliptical filters with zeros until they are the same size as the image.

Equation (9) captures the entire filtering process, which proceeds as described below (a numpy sketch of these steps follows the list).

  1. Do once: Compute the elliptical filters (1s in the cross-hatched areas in Fig. 1 and 0 everywhere else) and compute $N_k$, the number of 1s in each orientation. Divide each filter by its $N_k$. Then, pad the filters with zeros until they are of size $N \times N$. Compute the DFTs, $\hat{E}_k$, of the elliptical filters.
  2. Compute the DFT, $\hat{D}$, of the $N \times N$ image, $D$, to be filtered.
  3. Do q times: For each orientation of the filter, obtain the element-by-element product of the DFTs, $\hat{E}_k \cdot \hat{D}$. Then, take the inverse DFT of the product.
  4. Assign to each point in the filtered image the maximum of the results from each orientation's image at that point.
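Below is a minimal numpy sketch of steps 1–4 under the same naming assumptions as before (`data` for the image, `masks` for the binary orientation masks); it uses a running maximum so that only one filtered image is held at a time, and it is offered as an illustration rather than the authors' implementation.

```python
import numpy as np

def fft_filter(data, masks):
    """Transform-based large-scale filter: Eq. (9) with a running maximum."""
    n_rows, n_cols = data.shape

    # Step 1 (do once): normalize each mask by its number of 1s, zero-pad it
    # to the image size, and shift its center to the origin so the circular
    # convolution is aligned with image coordinates; then take its DFT.
    filter_dfts = []
    for mask in masks:
        kernel = np.zeros((n_rows, n_cols))
        kh, kw = mask.shape
        kernel[:kh, :kw] = mask / mask.sum()
        kernel = np.roll(kernel, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        filter_dfts.append(np.fft.fft2(kernel))

    # Step 2: DFT of the (fully valid; see section 2f) image.
    data_dft = np.fft.fft2(data)

    # Steps 3 and 4: element-by-element product, inverse DFT, running maximum.
    result = np.full((n_rows, n_cols), -np.inf)
    for e_hat in filter_dfts:
        filtered = np.real(np.fft.ifft2(e_hat * data_dft))
        np.maximum(result, filtered, out=result)
    return result
```

Because each mask is symmetric about its center, the correlation in Eq. (5) and the circular convolution computed here coincide, and the wraparound at the image edges is exactly the periodicity assumption discussed in section 2f. Real-input transforms (np.fft.rfft2/irfft2) would roughly halve the work.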

Fast algorithms exist to compute the DFT of sequences whose lengths are products of small prime numbers.5 Even if the original image is not of a convenient size, it can be padded with zeros to make it of such a size. These algorithms typically take on the order of $N \log N$ operations to compute the DFT of a sequence of length $N$.

Since the images are of size $N \times N$, the computational overhead of step 2 in our technique is $N^2 \log_2(N^2)$. In step 3, we need to do $N^2$ multiplications and take an inverse DFT, making the overhead $N^2 + N^2 \log_2(N^2)$. Since step 3 has to be done $q$ times, the total computational overhead of the modified technique is only $(3q + 2)N^2 \log_2(N)$. As in section 2c, we do not count the maximum operation in our computational overhead calculation. Note that our computational overhead does not depend on $2p + 1$, the size of the filter. This is because we pad our filters to match the size of the images.

Again using a 512 × 512 image and a 15 × 64 elliptical filter in 10° increments, we have N = 512, p = 32, and q = 18. The computational overhead of the modified technique is of the order of 132 million operations. Recall that the original technique took on the order of 19 billion operations. We get an improvement of more than two orders of magnitude. In practice, we will not achieve such a high improvement because the original technique can be implemented with integer arithmetic, but the DFT technique requires floating point operations, which are slower on most computers.
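For concreteness, the two counts quoted above work out as follows (with N = 512, p = 32, and q = 18):

$$q(2pN)^2 = 18 \times (64 \times 512)^2 \approx 1.9 \times 10^{10},$$
$$(3q + 2)\,N^2 \log_2 N = 56 \times 512^2 \times 9 \approx 1.3 \times 10^{8},$$

a ratio of roughly 146, consistent with the "more than two orders of magnitude" figure above.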

Comparing the order-of-magnitude calculations, we expect an improvement of $1.33p^2/\log_2(N)$.6 For a 512 × 512 image with a filter size of 15 × 64, the improvement should, in theory, be about 150 times. In practice (as can be seen from Table 1), we can do the filtering about 50 times faster by using the technique described in this paper. It is also worth noting that for small filter sizes ($p$ less than 5), the technique actually degrades performance.

The idea behind the speedup in this paper, that convolution can be performed faster on computers using Fourier transforms, is more than 30 years old; Cooley and Tukey (1965) introduced the fast algorithm that forms the basis for most FFT algorithms today [including the one we used, by Frigo and Johnson (1998)]. By 1967, there was a special issue of the IEEE Transactions on Audio and Electroacoustics devoted to fast Fourier transform methods.

f. Simplifying assumptions

Note that in Eqs. (2) and (5) we assumed that $D_{i+m,j+n}$ is always valid. However, there are two instances where $D_{i+m,j+n}$ will not be valid.

First, the location $(i + m, j + n)$ may not be inside the grid. When the filtering is done close to the grid boundary, not all the points under the elliptical filter will exist; the average should then be computed only over those points that do exist. However, taking this boundary effect into account will cause a nonlinearity, since the divisor, $N_k$, is not a constant after all.

What happens to this assumption about $D_{i+m,j+n}$ in the transform method? To get to the discrete Fourier transform in Eq. (9), we assumed that the data values are periodic beyond the image edge. This is patently untrue; storms do not repeat every 250 km just for mathematical convenience. At the boundary, therefore, our assumption fails and the modified filter produces answers that are quite different from those of the original filter.

Second, even if $(i + m, j + n)$ is a valid location, the data value that it contains may not be valid. Wolfson et al. (1999) omit such pixels from the average calculation. However, in that case, the average has to be computed over different numbers of pixels depending on the location within the image. Consequently, the filter becomes shift-variant.

The duality of spatial convolution to Fourier transform multiplication holds only for linear, shift-invariant filters. So, both these simplifying assumptions have real consequences in the large-scale filtering context.

To retain shift invariance, a numerical value has to be assigned to every pixel in the image. After this ad hoc assignment, all pixels are assumed to contain valid information. This introduces differences between the results obtained with the transform method described in this paper and those obtained using the method described by Wolfson et al. (1999).
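As a minimal illustration of such an ad hoc assignment (the NaN flag for missing data and the fill value of 0 are assumptions made for the example, not a prescription from the paper):

```python
import numpy as np

def fill_missing(field, fill_value=0.0):
    """Give every pixel a numerical value so the filter stays shift-invariant.
    Missing data are assumed to be flagged as NaN; both the flag and the
    fill value are illustrative choices."""
    return np.where(np.isnan(field), fill_value, field)
```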

g. Updating the maximum

The maximum of q numbers can be written as an update equation:
$\max(x_1, x_2, \ldots, x_k) = x_k \vee \max(x_1, x_2, \ldots, x_{k-1}),$   (10)
that is, the maximum of the kth value and the maximum of the first k − 1 values. It is important to realize from the above equation that we do not need to know all the values in advance; that is, we can update the maximum as and when we get a new value. Consequently, we can perform the maximum operation on the images of Eq. (9) one at a time. Also, we do not need to store all k − 1 values to compute the kth value. This reduces computer memory requirements and, in practice, improves the speed of execution.

In section 2h, we use this update method to compute the maximum for both Eqs. (1) and (9). Recall that in the original method the maximum is taken over the filter responses at a single pixel, while in the modified technique it is taken over entire filtered images. Consequently, the memory in use at the time of the maximum operation is higher in the modified technique than in Wolfson et al. (1999). Hence, updating the maximum as each filtered image becomes available, instead of attempting to store 18 filtered images, matters most for the modified technique.

h. Performance

Both the original large-scale filtering technique and the modified technique were applied to a set of radar reflectivity images. Each image in the sequence was the lowest elevation scan of successive volume scans by the weather service radar (KFWS) in Fort Worth, Texas, on 8 May 1995. The polar data from the radar were mapped into a Cartesian grid with each pixel being approximately 1 km × 1 km and clipped at a range of 256 km from the radar.

We filtered a set of seven 512 × 512 radar reflectivity images using both techniques on a single-processor Sun Sparc Ultra 10. Note that the original technique was implemented almost completely in integer arithmetic, while the modified technique uses double-precision floating-point operations, so the timing comparison is conservative with respect to the new method.

3. Results and discussion

One of the images from the sequence of radar images obtained from the lowest elevation scan of the Fort Worth weather radar is shown in Fig. 2. That image, when filtered by the large-scale smoothing technique of Wolfson et al. (1999), is shown in Fig. 3. The output of the modified filtering process on the same image is shown in Fig. 4. For easier comparison, the absolute difference between the two output images is shown in Fig. 5.

Both of the effects described in section 2f are observed. The assumption of periodicity of the image shows up as large errors along the sides of the image. Where radar data are missing (close to the storm envelope and at some places within it), the pixels show large errors because an ad hoc value had to be assigned to them to retain shift invariance and thereby use the transform method.

The time in seconds taken by the original technique and the modified technique to process each image in the sequence of radar images using different filter sizes is shown in Table 1. The larger the filter size, the greater the relative speedup. Note that the modified technique takes longer on the first image; this is because the DFTs of the filters are computed at that point. As is apparent from Table 1, one can perform the operation in a fraction of the time taken by the original technique. In addition, since the time for the modified technique does not depend on the filter size, the time advantage increases as the filter size increases. The increase in the time taken by Wolfson et al.'s (1999) technique as the sequence progresses is an artifact of the storm case chosen: the storms strengthened and there were more valid pixels to process. The transform method is, of course, data independent, since an ad hoc value had to be assigned to each pixel in the image. If every pixel in the image had corresponded to valid data, the processing would have taken about 740 s using the original technique.

Note in Table 1 that the transform-based large-scale filtering method introduced in this paper is significantly faster. In fact, its time requirements are lower than the 30-s update interval of radar elevation scans. Thus, this method can be used for filtering both volume and elevation products. It can also be seen in Fig. 4 that the resulting image, though not identical to that obtained by the technique of Wolfson et al. (1999), extracts the larger scales well. Hence, if one is willing to accept the assumptions about the data values that the transform method imposes, the large-scale filtering can be performed in significantly less time. In environments where real-time performance is critical, the modified large-scale filter is a good choice.

Acknowledgments

The author used the whimsically named, but excellent, “Fastest Fourier Transform in the West” (FFTW) library (Frigo and Johnson 1998) to perform the fast Fourier transforms. The work detailed in this paper was partially funded by the National Severe Storms Laboratory, the Federal Aviation Administration, and a contract with WeatherData, Inc. The author thanks Marilyn Wolfson and her group at MIT Lincoln Laboratory for pointing out the inability of the transform method to deal with missing data.

REFERENCES

  • Bellon, A., and I. Zawadzki, 1994: Forecasting of hourly accumulations of precipitation by optimal extrapolation of radar maps. J. Hydrol., 157, 211–233.

  • Browning, K. A., 1979: The FRONTIERS plan: A strategy for using radar and satellite imagery for very-short-range precipitation forecasting. Meteor. Mag., 108, 161–184.

  • ——, C. G. Collier, P. R. Larke, P. Menmuir, G. A. Monk, and R. G. Owens, 1982: On the forecasting of frontal rain using a weather radar network. Mon. Wea. Rev., 110, 534–552.

  • Cooley, J. W., and J. W. Tukey, 1965: An algorithm for the machine calculation of complex Fourier series. Math. Comput., 19, 297–301.

  • Frigo, M., and S. Johnson, 1998: FFTW: An adaptive software architecture for the FFT. Proc. Int. Conf. on Acoustics, Speech and Signal Processing, Seattle, WA, IEEE Signal Processing Society, 1381–1384.

  • Kreyszig, E., 1983: Advanced Engineering Mathematics. John Wiley and Sons.

  • Wolfson, M., B. E. Forman, R. G. Hallowell, and M. P. Moore, 1999: The growth and decay storm tracker. Preprints, Eighth Conf. on Aviation, Range, and Aerospace Meteorology, Dallas, TX, Amer. Meteor. Soc., 58–62.

Fig. 1. Figure from Wolfson et al. (1999). The 5 × 21 pixel elliptical filter is shown at 2 of the 18 possible orientations. The region of support for the filter is shown by the cross-hatched pixels.

Fig. 2. Original radar image before large-scale filtering. This is the lowest elevation of a volume scan by the Weather Service Radar, KFWS, in Fort Worth, TX, on 8 May 1995. The polar data from the radar were mapped into a Cartesian grid with each pixel being approximately 1 km × 1 km and clipped at a range of 256 km from the radar.

Fig. 3. The effect of filtering the radar image in Fig. 2 using the large-scale filtering technique described in Wolfson et al. (1999). A 5 × 21 filter was used.

Fig. 4. The effect of filtering the radar image in Fig. 2 using the faster large-scale filtering technique described in this paper. Compare with the result of the original filtering technique (Fig. 3). A 5 × 21 filter was used. The absolute difference between this image and Fig. 3 is shown in Fig. 5.

Fig. 5. The absolute difference between the results of filtering the radar image in Fig. 2 using the faster large-scale filtering technique described in this paper and using the original filtering technique (Fig. 3). A 5 × 21 filter was used in both cases. The reasons for the difference in the outputs are explained in section 2f.

Table 1. The time (in s) taken to filter each 512 × 512 image in a sequence using the original and modified filter technique for different filter sizes. Note that the modified technique is significantly faster. The modified technique's time requirement is also independent of filter sizes.
1. This means that the filter is zero everywhere except in an ellipse-shaped region, where it has a constant nonzero value.

2. In the radial direction; in the azimuthal direction, resolution varies with radar range.

3. A system is linear only if the sum of the responses to two inputs $x$ and $y$ is the same as the response to $x + y$. It is easy to see that the presence of the $\vee$ operation in the equation makes it nonlinear. If we denote by $x_k$ the results of fil using $E_k$, the response of the equation to a pixel $x$ is $\vee(x_1, x_2, \ldots, x_n)$. One can verify that $\vee(x_1, x_2, \ldots, x_n) + \vee(y_1, y_2, \ldots, y_n)$ is not the same as $\vee(x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n)$. In this discussion, it has been assumed that the operation fil is linear, that is, the result of filtering $x + y$ with $E_k$ is $x_k + y_k$. We show in the next paragraph that this is indeed the case.

4. If there were no quantization effect, then $N_k$ would translate to the area of the ellipse and would not be changed by rotation. However, there is a quantization effect and one must keep track of the $N_k$'s.

5. Although radix-2 algorithms are the ones usually taught in most college courses, the underlying "divide and conquer" principle holds for any length that can be expressed as a product of small primes. Typical FFT implementations support sequence lengths whose only prime factors are 2, 3, and 5. For example, a sequence of length 515 has to be padded to a length of 540 before its DFT can be computed efficiently. An FFT implementation based on small primes therefore requires far less padding than a purely radix-2 algorithm would (which would need padding to a length of 1024 for a sequence of length 515).
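A short sketch of how such a padded length could be found (the function name is a hypothetical helper for illustration):

```python
def next_smooth_length(n):
    """Smallest length >= n whose only prime factors are 2, 3, and 5."""
    length = n
    while True:
        remainder = length
        for prime in (2, 3, 5):
            while remainder % prime == 0:
                remainder //= prime
        if remainder == 1:
            return length
        length += 1

# next_smooth_length(515) returns 540, as in the example above.
```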

6. If the number of filters, $q$, is large enough, then $q(2pN)^2 / [(3q + 2)N^2 \log_2(N)]$ is approximately $(4qp^2)/[3q \log_2(N)]$.
