1. Introduction
A note published in this journal (Makkonen 2006, hereinafter referred to as “the note”) posed the following question for “order-ranked data by which the return periods of specific extreme events are estimated”: “What is the cumulative probability P that should be associated with the sample of rank m?” The note claimed that the estimator P = m/(N + 1) derived by Weibull (1939) should be used exclusively to derive all statistical properties, including the mean recurrence interval, or return period, and the plotting positions on Gumbel axes. The note declared that the improvements in extreme value analysis methods developed since 1939 are “invalid” and that “many estimates of the weather-related risks should be reevaluated and the related building codes and other related regulations updated.” Publication of the note resulted in a press release by the American Meteorological Society under the title “Natural hazards are more common than statistics indicate.”
If valid, the note should have provoked an immediate and urgent response. The only published response in this journal has been the comments by de Haan (2007) that the note “addresses a technical problem of limited importance” since it “is concerned with the use of Gumbel probability paper” and fitting a Gumbel distribution “can be done much more efficiently by estimating its parameters (e.g., by maximum likelihood methods) than by plotting a line on Gumbel paper.” De Haan did not challenge the validity of the note but was more concerned with the tone and content of the associated press release. One issue raised by de Haan is particularly relevant: “the purpose of extreme value analysis is not discussed in the paper” [emphasis added by this author].
Many parameter-fitting methods that do not require direct estimates of sample probability in their application nevertheless incorporate assumptions into their derivation, and therefore the issue is of more importance than de Haan suggests. In any case, there are situations in which the ability to plot an unbiased sample distribution and fit this distribution to a model distribution using the weighted least mean squares method is useful—in particular, when fitting observations of wind speeds from mixed climates to a composite distribution (e.g., Cook et al. 2003; Cook 2004) or when comparing penultimate distributions with their asymptotes (e.g., Cook and Harris 2004, 2008).
The purpose of this comment is to show, using rigorous proofs (footnote 1) and demonstrations, that the unbiased estimators developed since 1939 are valid and provide a significant improvement in accuracy. This cannot be done without directly addressing the role of sampling error.
2. Probability
First, to help to understand the role of sampling error, it is instructive to review the meaning of “random” and the concept of “probability.” Cramer’s view (Cramer 1946, p. 138) is that random cannot be precisely defined and that its sense is best described by reference to real-world examples.
Cramer uses the classic coin-tossing experiment (Cramer 1946, 142–143) to demonstrate that, if “heads” occurs ν times in n tosses of the coin, the frequency ratio ν/n “shows a marked tendency to become more or less constant for large values of n” and that after a long series of repetitions the frequency would approximate to the probability P. Cramer describes this as “the frequency interpretation of the probability P.”
Cramer’s coin-tossing experiment was performed 400 times (Cramer 1946, p. 143), after which the mean frequency ratio is still not 0.5. With the advent of personal computers with built-in pseudorandom number generators and of efficient algorithms to eliminate any residual serial correlation in the pseudorandom numbers (Press et al. 1989), very many trials of more complex random experiments can be performed easily.




Sampling experiments, which are often referred to as “bootstrapping” (Efron and Tibshirani 1993), exactly replicate this frequency interpretation of P (except that, for practical reasons, the number of trials K is large but finite) and are therefore a convenient way to demonstrate sampling theory and the role of sampling error. Table 1 shows the results of a simple sampling experiment: 10 trials, each of N = 10 000 samples, were obtained from a conventional probability generator (footnote 2). The frequency of values below each of the thresholds 0.1–0.9, in increments of 0.1, was counted. It is clear that, although the results are close to the expected frequencies, every trial is slightly different because of the action of random sampling error.
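The experiment is simple enough to reproduce in a few lines. The Python sketch below is only a minimal illustration, not the generator used for Table 1; any modern uniform pseudorandom generator serves the purpose, and the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)          # any modern uniform pseudorandom generator
n_trials, N = 10, 10_000                # 10 trials of N = 10 000 samples, as in Table 1
thresholds = np.linspace(0.1, 0.9, 9)   # thresholds 0.1-0.9 in 0.1 increments

for trial in range(n_trials):
    u = rng.random(N)                                # uniform samples on [0, 1)
    freqs = [(u < t).mean() for t in thresholds]     # frequency below each threshold
    print(trial + 1, np.round(freqs, 4))
# Each row lies close to 0.1, 0.2, ..., 0.9 but differs slightly from trial to
# trial: the signature of random sampling error.
```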
3. Sampling error
Section 2 established that the concept of probability is precise only in the context of an infinite number of trials. When a real-world observation is made, it is done in the expectation that an ideal invariant process has been sampled. Thus all of the properties of this ideal process can be regarded as fixed, precise values, including the probability PI, the risk of exceedance QI, and the recurrence interval TI.
A value associated with any one trial sample is an imprecise estimate, however, and is subject to sampling error. Sampling error ε has two components: 1) a systematic mean component, the bias error, and 2) a random component, characterized by its standard deviation.
Note that, in any real-world situation, there will also be uncertainty in the measurement of the value: the observational error. Measurements would typically be expected to be accurate to <1%, but, in the case of wind speed, the World Meteorological Organization standard procedure requires the value to be rounded to the nearest knot (∼0.5 m s−1). This rounding results in an observational error εV with standard deviation σ(εV) ≈ 0.15 m s−1. As this value is small in comparison with the sampling error, the sampling error dominates (footnote 3). Observational error is not considered further in this paper.
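The quoted standard deviation is consistent with the usual result for rounding (quantization) error, assuming the error is uniformly distributed over ±Δ/2, where Δ is the rounding step; with Δ = 1 kt ≈ 0.514 m s−1 this gives roughly 0.15 m s−1. A worked check (the uniform-error assumption is made here for illustration):

```latex
% Rounding to the nearest knot (step \Delta \approx 0.514 m s^{-1}), with the
% rounding error assumed uniform over \pm\Delta/2:
\sigma(\varepsilon_V) \;\approx\; \frac{\Delta}{\sqrt{12}}
  \;\approx\; \frac{0.514}{3.46}\ \mathrm{m\,s^{-1}}
  \;\approx\; 0.15\ \mathrm{m\,s^{-1}}
```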
4. Return period








5. Order statistics
The analysis of extreme wind speeds deals with sets of samples and operates through their order statistics, that is, their rank in order of value. The convention is to assign a rank m from m = 1 to m = N, in ascending order of value—although descending order is sometimes used when this simplifies expressions (especially those framed in terms of Q). Here the subscript m:N is used to denote the mth ranked value out of N, where m = 1 is the smallest and m = N is the largest.
Figure 1 shows the mean probability for each rank m of ranked samples of size N = 9, accumulated over an increasing number of trials n, stabilizing onto m/10 = m/(N + 1) as n becomes large.
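A minimal sketch of the bootstrap behind Fig. 1 (trial count and seed are arbitrary): draw N = 9 uniform sample probabilities per trial, rank them, and average each rank over the trials.

```python
import numpy as np

rng = np.random.default_rng(2)
n, N = 10_000, 9                           # n trials of ranked samples of size N = 9
u = np.sort(rng.random((n, N)), axis=1)    # each row is one trial; columns are ranks m = 1..N
mean_P = u.mean(axis=0)                    # mean sample probability for each rank
print(np.round(mean_P, 3))                 # stabilizes onto m/10 = m/(N + 1)
```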
6. Expectations
a. Mean value, or expectation
As shown above, the concept of probability depends on the mean frequency stabilizing over a large number of trials. It follows directly that the best estimate of any random property x is this mean value accumulated over many trials, which tends to the expectation.
In this paper, the notation E(x) is used to denote the expectation, that is, the mean value derived from theory, and an overbar is used to denote the corresponding mean value obtained from the sampling trials.


b. Expectation of probability and its standard deviation








The standard deviations, σ(Pm:N), for the experiment of Fig. 1 are compared with the expectations from Eq. (10) in Fig. 2. This shows that the 10 000 trials reproduce the expected behavior extremely well.
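Although Eq. (10) is not reproduced here, the expected standard deviation presumably follows from the standard order-statistic result that the mth ranked probability out of N follows a Beta(m, N − m + 1) distribution, so that σ(Pm:N) = sqrt[m(N − m + 1)/((N + 1)²(N + 2))]. A minimal check of that form against the bootstrap:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 10_000, 9
u = np.sort(rng.random((n, N)), axis=1)
sigma_boot = u.std(axis=0)                       # bootstrap std of P for each rank

m = np.arange(1, N + 1)                          # Beta(m, N - m + 1) standard deviation
sigma_beta = np.sqrt(m * (N - m + 1) / ((N + 1) ** 2 * (N + 2)))
print(np.round(sigma_boot, 4))
print(np.round(sigma_beta, 4))                   # the two rows agree closely
```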
c. Expectation of risk and its standard deviation




d. Expectation of return period and its standard deviation




Figure 3 shows the mean recurrence interval for each rank in the experiment of Fig. 1, accumulated over an increasing number of trials n: occasional very high, potentially infinite, sample values drive the mean for the highest rank toward infinity, while the remainder stabilizes onto 9/(9 − m).
Figure 4 shows the mean return periods for the samples in the experiment of Fig. 1 in comparison with the expectations E(Rm:N) from Eq. (13) and the values derived from the mean risk. The sample means match the expectations except where the expectation is infinite, whereas the return periods derived from the mean risk are underestimates.


Figure 5 shows the corresponding standard deviations for the samples in the experiment of Fig. 1 in comparison with the expected standard deviations σ(Rm:N) from Eq. (15). The match is good where the expectations remain finite.
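Consistent with the captions of Figs. 3 and 4, the expectation of the sample recurrence interval for rank m appears to be E(Rm:N) = N/(N − m), which is infinite for the top rank. A minimal bootstrap check, taking the sample recurrence interval as R = 1/(1 − P) for each ranked sample probability:

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 10_000, 9
u = np.sort(rng.random((n, N)), axis=1)          # ranked sample probabilities
R = 1.0 / (1.0 - u)                              # sample recurrence interval per rank

m = np.arange(1, N + 1)
print(np.round(R.mean(axis=0), 2))               # close to N/(N - m); top rank drifts upward
with np.errstate(divide="ignore"):
    print(np.round(N / (N - m), 2))              # expectation: 1.12, 1.29, ..., 9.0, inf
```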
e. Expectation of the Fisher–Tippett type 1 (FT1) variate y and its variance








Gumbel {1958, Eq. [6.1.1.(2)]}, however, gives the analytical solution for the highest value: E(yN:N) = γ + ln(N), where γ = 0.577 215 7 is Euler’s constant, which for N = 9 gives E(yN:N) = 2.774. Gringorten’s estimator gives E(yN:N) ≈ 2.759, whereas the bootstrap with n = 10 000 trials gives E(yN:N) ≈ 2.788, showing that both Gringorten (1963) and the numerical approach give excellent estimates.
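These three estimates are straightforward to reproduce. The sketch below assumes the usual form of the Gringorten plotting position, P = (m − 0.44)/(N + 0.12); the bootstrap value will vary slightly with the seed and number of trials.

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 10_000, 9
euler_gamma = 0.5772157

u = np.sort(rng.random((n, N)), axis=1)
y = -np.log(-np.log(u))                    # FT1 reduced variate of every ranked sample
print(round(y[:, -1].mean(), 3))           # bootstrap E(y_N:N), roughly 2.77-2.79

print(round(euler_gamma + np.log(N), 3))   # Gumbel's analytical result: 2.774

P_gring = (N - 0.44) / (N + 0.12)          # Gringorten position for the top rank m = N
print(round(-np.log(-np.log(P_gring)), 3)) # approximately 2.759
```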
For the standard deviation σ(ym:N) there is again no easy analytical alternative to a numerical, computer-based approach. The values of E(ym:N) and σ(ym:N) obtained by numerical integration form the core of the procedure derived by Harris (1996), giving the plotting positions and fitting weights for a weighted least squares regression.
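In outline, such a computation can be set up as quadrature of the reduced variate against the Beta density of the ranked probability. The sketch below is only an illustration of the idea under that assumption, not the Harris (1996) procedure itself; the function name and the trimmed integration limits are choices made here.

```python
import numpy as np
from scipy import integrate, stats

def ft1_rank_moments(m, N, eps=1e-12):
    """Mean and std of the FT1 reduced variate for rank m of N, by numerical quadrature."""
    pdf = stats.beta(m, N - m + 1).pdf               # density of the mth ranked probability
    y = lambda p: -np.log(-np.log(p))                # FT1 reduced variate
    Ey, _ = integrate.quad(lambda p: y(p) * pdf(p), eps, 1 - eps)
    Ey2, _ = integrate.quad(lambda p: y(p) ** 2 * pdf(p), eps, 1 - eps)
    return Ey, np.sqrt(Ey2 - Ey ** 2)

N = 9
for m in range(1, N + 1):                            # E(y_m:N) and sigma(y_m:N)
    print(m, *(round(v, 3) for v in ft1_rank_moments(m, N)))
# The trimmed endpoints (eps) avoid the mild logarithmic singularities at p = 0 and 1.
```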
Figure 6 shows the mean reduced variate for each rank of the experiment of Fig. 1, plotted on normalized Gumbel axes against the plotting positions given by the various estimators.
The expectation for a perfect fit in Fig. 6 is a slope of unity and an intercept of 0. The Weibull plotting positions from E(P) give a good estimate of the intercept but overestimate the slope by 18%, and therefore the datum design value of y for P = 0.98 (R = 50) is overestimated by 18%. The plotting positions from E(R) underestimate both slope and intercept, and therefore the datum design value of y for R = 50 is underestimated by 12.5%. The plotting positions of Gringorten (1963) give a very small underestimate of slope and a very small positive intercept, and therefore the datum design value of y is overestimated by only 0.08%. The numerical method of Harris (1996) gives a very small overestimate of slope and intercept, and therefore the datum design value of y is overestimated by 0.65%. The method of Lieblein (1974) gives an almost exact fit, and the datum design value is accurate to 0.01%.
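The slopes and intercepts quoted above come from the paper's own weighted fits (Table 3); the unweighted sketch below only illustrates the qualitative contrast between the Weibull and Gringorten positions, with the mean ranked reduced variate obtained by bootstrap.

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 10_000, 9
m = np.arange(1, N + 1)

u = np.sort(rng.random((n, N)), axis=1)
y_mean = (-np.log(-np.log(u))).mean(axis=0)      # mean ranked reduced variate (the "data")

for label, P in (("Weibull m/(N+1)   ", m / (N + 1)),
                 ("Gringorten (1963) ", (m - 0.44) / (N + 0.12))):
    y_plot = -np.log(-np.log(P))                 # plotting positions from the estimator
    slope, intercept = np.polyfit(y_plot, y_mean, 1)
    print(label, round(slope, 3), round(intercept, 3))
# The Weibull positions give a slope well above unity; the Gringorten positions
# give a slope and intercept much closer to the ideal values of 1 and 0.
```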
7. Review of the case made in the note
a. “Introduction”
The note (Makkonen 2006) claims in its introduction section that return period is the primary datum parameter for engineering design to resist wind loads. This may have been true 40 years ago, when the first statistically based wind code was published (BSI 1970), but is not the case today. The problem with using return period as the reference datum is that it was interpreted by many engineers as being equivalent to a “design lifetime.” A deliberate change to the description of the primary datum parameter for buildings from R = 50 yr to annual risk of exceedance Q = 0.02 was introduced into many design codes in the 1990s (e.g., BSI 1995) to emphasize that the corresponding risk of failure applied to each and every year the building is exposed to the wind, irrespective of the building’s design lifetime.
Most current codes of practice for buildings, bridges, and other engineering structures now use a reliability-based approach to setting the design wind loads, the design resistance of the structure, and the corresponding safety factors. This is done by ensuring that the wind load multiplied by γf is less than the structural resistance divided by γm, where γf and γm are partial safety factors for load and material strength, respectively, with values set to control the probability of failure (expressed as a “reliability index” β). From this, it can be seen that the design wind load at the datum risk is not the sole parameter; it is the upper tail of the distribution, which sets the reliability index β, that is important. Hence the emphasis in assessing wind speed observations should be not on obtaining the best value for just the datum risk but on obtaining the best estimate of the upper tail of the distribution and hence of both FT1 parameters: mode and dispersion.
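In symbols (W and S are used here only as generic stand-ins for the characteristic wind load effect and the characteristic resistance; the codes use their own notation), the verification format described above is, roughly:

```latex
% Partial-safety-factor verification format (illustrative notation, not a
% particular code's):
\gamma_f \, W \;\le\; \frac{S}{\gamma_m}
```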
b. “The history of plotting positions”
In this titled section, the note claims that the Weibull estimator P = m/(N + 1) plays a unique role. Authors that have used other estimators, for example, modal or median values, are quoted in a manner that suggests that they may have confused and contradictory views on the subject. As noted by de Haan (2007), the note does not address the differing purposes for making these estimates. When the purpose is to obtain the most likely value, the modal estimate is appropriate. If the issue is when a value may next be exceeded, this is equally likely to occur before or after the median return period. Most often, the purpose is to fit a model distribution using methods that require the expectation.
Four sources are cited in the note to support the claim for the unique role of the Weibull estimator in extreme value analysis.
1) Gumbel (1958)
Gumbel indeed advocates the use of P = m/(N + 1) but makes it clear that this produces modal plotting positions ỹ, that is, values that correspond to the mode of the sampling error. Gumbel denotes the corresponding modal recurrence interval by T(ỹ), for example, in his Table 6.1.2(2), to distinguish it from the mean recurrence interval.
2) Cook (1982, 1985)
This paper and textbook specifically address the bias error caused by the modal plotting positions ỹ and advocate the use of the Lieblein BLUE method (Lieblein 1974) to eliminate this bias.
3) Cook et al. (2003)
This paper did not use the Weibull estimator at all but used the mean plotting positions for the reduced variate.
4) Summary
It is therefore clear that none of these four citations support the argument of a unique role for the Weibull estimator, and their citation is perverse.
c. “The correct probability positions for estimating return periods”
The first step in this section of the note is to derive the expectation E(P) from the binomial distribution, resulting in the Weibull estimator. Equations (4) and (5) of the note are identical to Eqs. (8) and (9) in this paper—it is not disputed that the Weibull estimator gives the expectation E(P). The second step in the note is to derive the mean rate of exceedance
The fourth step in this section of the note contains a fundamental error. Equation (9) of the note equates R to the reciprocal of the mean risk of exceedance, 1/[1 − E(P)], that is, it treats the expectation of the return period as if it were concomitant with the expectation of the probability.


The conclusion of this section, that the Weibull estimator gives both E(P) and the mean return period, therefore does not follow.
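The distinction can be made concrete with the order-statistic results quoted above. Taking rank m = 7 of N = 9 as an arbitrary example, substituting E(P) = m/(N + 1) = 0.7 into R = 1/(1 − P) gives 3.33, whereas the expectation of the sample return period itself is N/(N − m) = 4.5. A minimal bootstrap check:

```python
import numpy as np

rng = np.random.default_rng(7)
n, N, m = 100_000, 9, 7                          # rank m = 7 of N = 9, as an example
u = np.sort(rng.random((n, N)), axis=1)[:, m - 1]

print(round(1.0 / (1.0 - u.mean()), 2))          # 1/[1 - E(P)] = 1/(1 - 0.7) ~ 3.33
print(round((1.0 / (1.0 - u)).mean(), 2))        # E(R) = N/(N - m) = 4.5: not the same
```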
d. “Plotting positions involving a reduced variate”
Section 4 of the note makes the following assertions:
(i) The plotting position should always be distribution free.
(ii) Where the variate is transformed, the transformation should always be applied after the plotting position is determined, for example, to give the modal value ỹ = y[E(P)].
(iii) “[T]here exists a fundamental connection between the rank of an observation and the estimate of its return period” [italics in original].
(iv) The Weibull estimator should always be used: “Eq. (10) must not be manipulated based on an arbitrary choice of the scale on the ordinate axis of the graph that is devised to merely alleviate the analysis of the data.” The “fitting procedure may reflect the scale used, but the probability positions of the data must be the same regardless of the method of fitting. In other words, the plotting formula P = m/(N + 1) is valid regardless of the transformation made” [italics in original].
(v) Determination of the “plotting positions” and the fitting of observations to a model are two distinct and separate processes.
(vi) The “foundation for the use of the mean E[F(xm)] is merely that the return period R is defined as the mean time period T between events that exceed F(xm)” [italics in original; in the current paper F(xm) is Pm:N].
(vii) The “linearity, shown by Gumbel (1958) to exist as a result of plotting E[F(xm)] by Eq. (3), is lost when E(ηm) is being plotted” [in the current paper E(ηm) is E(ym:N)].
(viii) The “best estimate of a sample parameter is its mean value only if that parameter is additive” because otherwise “plotting positions no more correspond to the probability P that is required to estimate the return period.”
(ix) The “concept of distribution-specific plotting formulas in analyzing return periods should be abandoned. This causes no problems to the analysis, however, because the Weibull plotting formula P = m/(N + 1) is to be used regardless of the underlining distribution.”
In asserting i, Makkonen (2006) follows Gumbel (1958) in desiring a universal distribution-free plotting position. This is an ideal aspiration but is not essential to any pragmatic approach. As a consequence, in ii the Weibull estimator for E(P) gives the modal plotting position ỹ on Gumbel axes. Unlike Gumbel, however, the note ignores the bias error this creates, and this bias error depends on the distribution. Statisticians since Gumbel have developed methods to eliminate bias error that are necessarily distribution dependent. Assertion iii is true only for the ideal invariant process: for all finite samples there are a number of estimates each associated with a different purpose, and so assertion iv does not follow. Manipulating data to facilitate analysis is called preconditioning and is a legitimate procedure in common use in many fields. Banning its use without due cause unnecessarily eliminates a large class of valid methods.
The two important issues raised by de Haan (2007) are relevant to assertion v. The note does not address the purpose of extreme value analysis; instead, it cites authors who declare a specific purpose and use a rigorous axiomatic approach to derive the corresponding plotting positions, presenting them as examples of erroneous practice. The principal purpose of the cited authors is to obtain the best estimates of the parameters of the distribution. It is self-evident that this purpose cannot be achieved on a distribution-free basis. Assignment of the sample probability for each observation is only a precursor to fitting to a distribution, and therefore it is irrelevant to the main purpose whether this fit is made in one or two steps. Some methods, such as Lieblein (1974), fit in one step without requiring sample probabilities at all, but these are implicit in the derivation of the methods. For methods that fit using mean values, it makes no sense to adopt modal values and then to have to correct for the resulting bias in a second procedure if the fit can be obtained in one step.
Assertion vi exposes the fundamental error in the note: E(P) does not lead to the mean time between events, that is, to the mean recurrence interval, because R is a nonlinear function of P and their expectations are not concomitant.
Assertion vii is untrue: the meteorological examples in Gumbel [1958, graphs 6.3.1, 6.3.3(1), and 6.3.3.(2)] are forced to a linear fit using E(P). It is impossible to show that E(y) produces a less linear fit because y is linearly related to x and E() is a linear operation. The Gumbel plot for the sampling experiment, Fig. 6, demonstrates that neither of the distribution-free estimators, E(P) or E(R), reproduces the linear form of the datum distribution from which the samples were drawn, but rather they bracket the datum straight line as a convex/concave pair of curves. The distribution-dependent Gringorten (1963) estimator and the Harris (1996) method reproduce the sampled distribution almost exactly.
Assertion viii is not true. It follows from the frequency interpretation of probability that the best estimate of any random variate is its mean value accumulated over a very large number of trials, that is, tends to the expectation. The note’s assertion that the plotting position should always be determined by the Weibull estimator before any linearizing transformations are applied violates this principle. The note asserts that “P is being estimated,” but this stems from two sources of error: 1) the assumption that the required parameter is the return period R and 2) the assumption that E(R) and E(P) are concomitant, when neither assumption is true. The purpose of the Gumbel axes is to linearize the parameters of the distribution with respect to the variate, and therefore it is y, and not R, that is being estimated. Because y(P) is also nonlinear, the best estimates of P, R, and y are not mutually concomitant.
Assertion ix, that all distribution-dependent estimators should be abandoned in favor of the Weibull estimator, fails under the weight of all the contrary evidence presented above. Indeed, the opposite is conclusively demonstrated in that only the distribution-dependent methods reproduce the original sampled distribution. Of course, the bias errors reduce in inverse proportion to the square root of the number of observations. The small sample size N = 9 used here was chosen to demonstrate the effect, but this value is comparable to the recommended smallest number of observations (N = 10) for extreme value analysis of annual maxima (Lieblein 1974). In developed countries, observational periods are typically 20–50 years, but many locations in developing countries have observational periods that are too short for reliable analysis.
8. Conclusions
There are two fundamental axioms that stem directly from the conventional frequency interpretation of probability:
1) The best estimate of any parameter in a finite-length random sample is always the expectation of that parameter.
2) Sampling methods, or bootstrapping, exactly replicate the frequency interpretation of probability.
From axiom 1, it follows that the expectations of nonlinearly related parameters are not concomitant in any finite-length sample, which means that the best estimate of each must be separately estimated and not derived from another parameter—this is an inconvenient truth that cannot be avoided by appealing to a fundamentalist dogma. From axiom 2, it follows that mean values obtained by bootstrapping represent the expectation to any desired accuracy, subject only to the number of trials used. Bootstrapping can therefore be used to demonstrate almost any random-based issue. In addition to providing unbiased plotting positions, a valuable role of sampling methods is to provide confidence limits for any of the observations and for any value predicted from the fits.
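As an illustration of that last role, percentile confidence limits for each rank follow directly from the same trials used throughout this comment; the 95% band below is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(8)
n, N = 10_000, 9
u = np.sort(rng.random((n, N)), axis=1)
y = -np.log(-np.log(u))                          # reduced variate of every ranked sample

lo, hi = np.percentile(y, [2.5, 97.5], axis=0)   # 95% sampling limits for each rank
for m in range(1, N + 1):
    print(m, round(float(lo[m - 1]), 2), round(float(hi[m - 1]), 2))
```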
The case made by the note appears to stem from an incomplete appreciation of the principles of extreme value analysis and of modern reliability-based design codes. The note contains selective and inaccurate reinterpretations of old sources. The fundamental error in Eq. (9) of the note erroneously selects the Weibull estimator for return period as well as for probability E(P), and this is interpreted as imbuing “uniqueness” to this estimator. So, starting with an inappropriate emphasis on return period, followed by a fundamental error in deriving its estimator, the note reduces to nine assertions—every one of which has been shown to be wrong.
Table 4 of the note wrongly set the Weibull estimator as the datum for zero error, and, as a consequence, the Harris (1996) numerical method was perversely assigned the greatest error. The opposite has been demonstrated to be true: in exactly representing the frequency interpretation of probability, sampling methods have zero bias error.
Acknowledgments
The many helpful discussions with Mr. R. I. Harris of the issues addressed by this comment are gratefully acknowledged.
REFERENCES
Beard, L. R., 1943: Statistical analysis in hydrology. Trans. Amer. Soc. Civ. Eng., 108 , 1110–1160.
BSI, 1970: Loading. Part 2: Wind loads. Code of Basic Data for the Design of Buildings, Chapter V, British Standards Institution, 55 pp.
BSI, 1995: Part 2: Code of practice for wind loads. BS6399 Loading for Buildings, British Standards Institution.
Castillo, E., 1988: Extreme Value Theory in Engineering. Academic Press, 389 pp.
Cook, N. J., 1982: Towards better estimation of extreme winds. J. Wind Eng. Ind. Aerodyn., 9 , 295–323.
Cook, N. J., 1985: Background, Damage Survey, Wind Data and Structural Classification. Part 1, The Designer’s Guide to Wind Loading of Building Structures, Butterworth Scientific, 371 pp.
Cook, N. J., 2004: Confidence limits for extreme wind speeds in mixed climates. J. Wind Eng. Ind. Aerodyn., 92 , 41–51.
Cook, N. J., and R. I. Harris, 2004: Exact and general FT1 penultimate distributions of wind speeds drawn from tail-equivalent Weibull parents. Struct. Saf., 26 , 391–420.
Cook, N. J., and R. I. Harris, 2008: Postscript to “Exact and general FT1 penultimate distributions of wind speeds drawn from tail-equivalent Weibull parents”. Struct. Saf., 30 , 1–10.
Cook, N. J., R. I. Harris, and R. J. Whiting, 2003: Extreme wind speeds in mixed climates revisited. J. Wind Eng. Ind. Aerodyn., 91 , 403–422.
Cramer, H., 1946: Mathematical Methods of Statistics. Princeton University Press, 575 pp.
de Haan, L., 2007: Comments on “Plotting positions in extreme value analysis”. J. Appl. Meteor. Climatol., 46, 396.
Efron, B., and R. J. Tibshirani, 1993: An Introduction to the Bootstrap. Chapman and Hall, 436 pp.
Gringorten, I. I., 1963: A plotting rule for extreme probability paper. J. Geophys. Res., 68 , 813–814.
Gumbel, E. J., 1958: Statistics of Extremes. Columbia University Press, 375 pp.
Harris, R. I., 1996: Gumbel re-visited—A new look at extreme value statistics applied to wind speeds. J. Wind Eng. Ind. Aerodyn., 59 , 1–22.
Hazen, A., 1914: Storage to be provided in impounding reservoirs for municipal water supply. Trans. Amer. Soc. Civ. Eng. Pap., 1308 , 1547–1550.
Lieblein, J., 1974: Efficient methods of extreme value methodology. National Bureau of Standards Rep. NBSIR 74-602, 31 pp.
Makkonen, L., 2006: Plotting positions in extreme value analysis. J. Appl. Meteor. Climatol., 45 , 334–340.
Press, W. H., B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling, 1989: Numerical Recipes in Pascal. Cambridge University Press, 759 pp.
von Mises, R., 1941: On the foundations of probability and statistics. Ann. Math. Stat., 12 , 91.
Weibull, W., 1939: A statistical theory of the strength of materials. Ing. Vetensk. Akad. Handl., 151 , 1–45.

Fig. 1. Mean probability for each rank m of ranked samples of size N = 9 for n trials, accumulating as n increases, and stabilizing onto m/10.

Fig. 2. Std dev of sample probability for each rank of ranked samples of size N = 9 from n = 10 000 trials. Samples match the expectations well.

Fig. 3. Mean recurrence interval for each rank of ranked samples of size N = 9 for n trials, accumulating as n increases. Occasional very high, potentially infinite, sample values drive the mean recurrence interval for the highest rank toward infinity while the remainder stabilizes onto 9/(9 − m).

Fig. 4. Mean return period for each rank of ranked samples of size N = 9: open circles denote samples from n = 10 000 trials, plus signs indicate the expectation, and times signs indicate the value derived from the mean risk. The samples match the expectation, except where the expectation is infinite. Return periods derived from the mean risk are underestimates and therefore exaggerate design values.

Fig. 5. Std dev of return period for each rank of ranked samples of size N = 9: open circles denote samples from n = 10 000 trials and plus signs indicate the expectation. The samples match the expectations, except for the top two ranks, for which the expectations are infinite.

Fig. 6. Normalized Gumbel plot for the mean of n = 10 000 trials of ranked samples of size N = 9 drawn from an FT1 distribution for various plotting positions. The datum source distribution −ln[−ln(PI)] is the straight line at 45° through the origin on these standard axes. The plus signs show the expectation of probability, E(P) = m/(N + 1): values greater than the mode lie to the left of the datum line so that a straight-line fit through these points overestimates the reduced variate, but the best fit through the points is a curve, concave upward. The times signs show the expectation for return period, P[E(R)] = m/N: values lie to the right of the datum line so that a fit underestimates the reduced variate, and the best fit is a curve, convex upward. The large open circles show Gringorten’s plotting positions: values lie close to the datum line, indicating a good empirical fit—almost linear, but with a small concave upward trend. The small open circles are from the numerical method of Harris (1996): values lie along the datum line, confirming the accuracy of this method.
Table 1. Mean frequency of values below threshold, for 10 trials each of N = 10 000 random samples.
Table 2. Mean probabilities, risks, and recurrence intervals below various threshold probabilities, for n = 1 000 000 trials.
Table 3. Model parameters fitted to the mean of n = 10 000 trials of ranked samples of size N = 9 drawn from an FT1 distribution.
Table 4. Corrected version of the table in the note, giving the error in return period R corrected for E(R) and resorted chronologically.


1. Because of space restrictions, only the starting expression and final result of each proof are given in this paper. The complete proofs were submitted for review with the paper and are available on application by e-mail to the author: nick.cook@rwdi.com or njcook@ntlworld.com.
2. One that provided uncorrelated random numbers in the range 0–1 drawn from a rectangular distribution.
3. To be in accord with this condition, Castillo (1988) and Harris (1996) recommend that, in regressions, probability be taken as the dependent variable and wind speed as the independent variable, which is the reverse of the usual convention.