## 1. Introduction

In the last decade, numerical weather prediction (NWP) models have substantially improved forecasts of tropical cyclones and their tracks. This improvement is due not only to better model performance but also to ensemble forecasting, which reduces the influence of initial-condition uncertainty on forecasts (Zhang and Krishnamurti 1997). NWP ensemble predictions of tropical cyclone tracks are generated by two different methods. The first is a consensus forecast from an ensemble of NWP models (Goerss 2000; Elsberry and Carr 2000), based on the idea that an error-minimizing combination of independent forecasts improves performance, on average. This approach has been successfully demonstrated for short-term weather and hurricane track forecasts (Fraedrich and Leslie 1987; Leslie and Fraedrich 1990). The second procedure is based on an ensemble of initial conditions that, for a single NWP model, can be provided by random, locally bred, or optimal growth vectors (e.g., Cheung 2001).

In addition to improving NWP models, there is also a need to improve the performance of empirical cyclone-track forecast models, because they are useful in assisting deterministic forecasts. The majority of these schemes are of the climatology–persistence (CLIPER) type introduced and successfully applied by Neumann (1981). Other nonlinear empirical methods gain skill by forecast-error recycling, which uses information from past forecast errors in future predictions (Fraedrich et al. 2000), or by metric adaption of analog-based forecasts (Fraedrich and Rueckert 1998; Sievers et al. 2000), which produce ensemble predictions. In a simulated operational trial, the latter scheme showed higher skill than the CLIPER scheme in both the Atlantic and eastern Pacific basins.

In this work, we apply the self-adapting analog ensemble forecast scheme to Australian tropical cyclones (Fig. 1), present case studies, compare the scheme with numerical weather prediction accuracy, and include an error–spread analysis. In section 2 the metric adaption of the analog model is introduced. In section 3, forecast skill, the relation between forecast error and ensemble spread, case studies, and a comparison with numerical schemes are discussed.

## 2. Self-adapting analog ensemble prediction: Model building

The method and model building procedures have been discussed in some detail previously (Fraedrich and Rueckert 1998; Sievers et al. 2000) so a short outline will suffice. One way of analyzing and forecasting dynamical systems with statistical methods is the embedding of the cyclone tracks in a state space spanned by measured variables, not unlike the analog hurricane track forecast based on the Euclidean metric [Hurricane Analog (HURRAN); Hope and Neumann 1970]. Here we extend this method to a scheme that adapts the state space metric to predict cyclone tracks in an error-minimizing fashion. Fraedrich and Rueckert (1998) developed such a method that iteratively reduces a user-defined forecast error by suitably fitting metric weights for components of the reconstructed state space, which enter the analog forecast scheme. Model building proceeds as follows. The first two steps are basic for analog forecasting: 1) the introduction of an error measure and, after state-space reconstruction, 2) model building, which adapts the metric weights at minimum forecast error. Step 3 optimizes the procedure.

In step 1, the error measure *e*(*t*_{j}) represents the distance between the observed track and the *j*th analog forecast of the track position, accumulated over all lead times *i* = +6, +12, … , +72 h:

*e*(*t*_{j}) = Σ_{i} {[*x*_{1}(*t*_{0} + *i*) − *x*_{1}(*t*_{j} + *i*)]² + [*x*_{2}(*t*_{0} + *i*) − *x*_{2}(*t*_{j} + *i*)]²}^{1/2},

where [*x*_{1}(*t*_{j}), *x*_{2}(*t*_{j})] denotes the longitudinal and latitudinal positions of the *j*th analog state **x**(*t*_{j}) = [*x*_{1}(*t*_{j}), … , *x*_{D}(*t*_{j})], embedded in the *D*-dimensional space spanned by the dependent dataset, and [*x*_{1}(*t*_{0}), *x*_{2}(*t*_{0})] is the position of the observed initial state.
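The step-1 error measure can be sketched numerically. The sketch below is a minimal Python illustration, assuming great-circle distances between the observed and analog 6-hourly positions; the function names and the exact way the distances are accumulated are illustrative, not taken from the paper.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def great_circle_km(lon1, lat1, lon2, lat2):
    """Great-circle distance (km) between two (lon, lat) points in degrees."""
    lon1, lat1, lon2, lat2 = map(np.radians, (lon1, lat1, lon2, lat2))
    arg = (np.sin(lat1) * np.sin(lat2)
           + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return EARTH_RADIUS_KM * np.arccos(np.clip(arg, -1.0, 1.0))

def analog_forecast_error(observed_track, analog_track, leads=range(1, 13)):
    """Step-1 error e(t_j): distance between observed and jth-analog track
    positions accumulated over lead times +6, +12, ..., +72 h (indices 1..12
    at 6-h steps). Each track is a sequence of (lon, lat) fixes starting at
    the initial time."""
    return sum(great_circle_km(*observed_track[i], *analog_track[i])
               for i in leads)
```

An analog identical to the observation scores zero error; a track displaced by one degree of latitude at every fix accumulates roughly 111 km per lead time.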

In step 2, building the analog forecast scheme, a metric *d*[**x**(*t*_{0}), **x**(*t*_{j})] = Σ^{D}_{k=1} *G*_{k} tanh{[*x*_{k}(*t*_{0}) − *x*_{k}(*t*_{j})]²} is introduced for the analog and observed states, **x**(*t*_{j}) = [*x*_{1}(*t*_{j}), … , *x*_{D}(*t*_{j})] and **x**(*t*_{0}) = [*x*_{1}(*t*_{0}), … , *x*_{D}(*t*_{0})], with metric weights *G*_{k} and embedding dimension *D*. The hyperbolic tangent function is chosen because it reduces overlearning effects and saves computational cost.
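A minimal sketch of this weighted tanh metric, applying tanh componentwise to the squared differences as in the formula (the function name is illustrative):

```python
import numpy as np

def adapted_metric(x0, xj, G):
    """d[x(t0), x(tj)] = sum_k G_k * tanh((x0_k - xj_k)^2).
    For small differences each term behaves like the weighted squared
    distance; for large differences it saturates at G_k, damping outliers."""
    x0, xj, G = map(np.asarray, (x0, xj, G))
    return float(np.sum(G * np.tanh((x0 - xj) ** 2)))
```

With all weights *G*_{k} = 1 and small component differences this reduces to (approximately) the squared Euclidean distance, which is the starting point of the adaption.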

The third step finds optimal weights *G*_{k} (i.e., changes the state space optimally) so that the analogs provide the best forecasts. This is achieved by a learning rule that optimizes the metric weights using ensemble forecasts with *N* ensemble members (the ensemble size). We distinguish between "near" neighbors of the reference state, which are analog states with a small metric *d*, and "best" neighbors, which are analog states with a small forecast error *e*(*t*_{j}). The (*N* + 1) nearest neighbors **x**(*t*_{1}), … , **x**(*t*_{N+1}) of the observed state **x**(*t*_{0}) are identified within the dependent dataset. From these (*N* + 1) nearest neighbors, (a) the *N* nearest neighbors are selected and (b), with respect to their individual errors *e*(*t*_{j}), the *N* best neighbors are chosen, discarding the neighbor with the highest error. Note that the number of nearest and best neighbors *N* is defined by the ensemble size. The metric weights *G*_{k} are adapted by comparing the squared distances *f*_{k} = Σ^{N}_{n=1} [*x*_{k}(*t*_{0}) − *x*_{k}(*t*_{n})]² and *b*_{k} = Σ^{N}_{m=1} [*x*_{k}(*t*_{0}) − *x*_{k}(*t*_{m})]² of the *N* nearest (*n* = 1, … , *N*) and the *N* best (*m* = 1, … , *N*) ensemble members from the observed state **x**(*t*_{0}). New metric weights are defined as *G*^{′}_{k} = *G*_{k}〈*f*_{k}〉/〈*b*_{k}〉, where angle brackets denote averages over all states of the dependent dataset.
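The weight update can be sketched as follows; for brevity this sketch evaluates the rule for a single reference state, whereas the full scheme averages 〈*f*_{k}〉 and 〈*b*_{k}〉 over all states of the dependent dataset (a simplification noted in the comments):

```python
import numpy as np

def update_weights(G, x0, near, best):
    """One application of the heuristic learning rule for a single state:
    G'_k = G_k * f_k / b_k, with f_k and b_k the squared component distances
    of the N nearest and N best neighbors from the observed state x0.
    `near` and `best` are (N, D) arrays of neighbor states; in the full
    scheme f_k and b_k are additionally averaged over the whole dependent
    dataset before the ratio is taken."""
    x0, near, best = map(np.asarray, (x0, near, best))
    f = np.sum((x0 - near) ** 2, axis=0)   # f_k, summed over nearest neighbors
    b = np.sum((x0 - best) ** 2, axis=0)   # b_k, summed over best neighbors
    return np.asarray(G) * f / b
```

Components along which the best neighbors sit closer than the merely nearest ones get their weights increased, so the metric progressively favors the directions that matter for the forecast.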

The learning rule is heuristic; that is, it is not guaranteed that the new weights find better analogs than the old ones. All states, even those far away from each other, contribute to adjusting the weights. As a consequence there is an overlearning effect in some cases: the mean forecast error grows after passing a minimum, even though the weights are still converging. This effect is reduced by the hyperbolic tangent, tanh{[*x*_{k}(*t*_{0}) − *x*_{k}(*t*_{j})]²}, which is approximately [*x*_{k}(*t*_{0}) − *x*_{k}(*t*_{j})]² for small arguments and bounded by 1 for large ones. The learning rule is applied iteratively to optimize the metric, starting with the Euclidean metric *G*_{k} = 1 (for *k* = 1, … , *D*). Instead of a defined convergence threshold, a more pragmatic method is applied that iterates the scheme 400 times and then uses the metric weights that achieve the minimum dependent-dataset error (averaged over all ensemble members and all states of the dependent dataset). Once the metric weights are optimally adapted, forecast analogs are searched for in this optimal phase space. Before estimating the final metric weights, the number of ensemble members *N* (ensemble size) is chosen by running the scheme several times with different *N* to obtain the optimal ensemble size. The ensemble mean forecast is defined as the arithmetic mean over all *N* ensemble members.
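The iteration with the minimum-error selection rule might look like the sketch below; `fit_step` is a hypothetical callback standing in for one pass of the learning rule over the dependent dataset (not an interface from the paper):

```python
import numpy as np

def train_metric(fit_step, D, n_iter=400):
    """Iterate the learning rule from the Euclidean metric (G_k = 1) and keep
    the weights with minimum dependent-set error, guarding against the
    overlearning effect (error growing again while weights still converge).
    `fit_step(G) -> (G_new, error)` applies one pass of the learning rule
    and returns the mean dependent-dataset error of the resulting weights."""
    G = np.ones(D)                  # start from the Euclidean metric
    best_G, best_err = G, np.inf
    for _ in range(n_iter):
        G, err = fit_step(G)
        if err < best_err:          # remember the minimum-error weights
            best_G, best_err = G.copy(), err
    return best_G, best_err
```

Because the minimum-error weights are remembered, later iterations that overlearn cannot degrade the returned metric.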

Before application of steps 1–3 to the cyclone track data, the dimension of the phase space needs to be defined. The embedding theorem (Sauer et al. 1991) requires a sufficient embedding dimension *D* = 2*D*_{a} + 1, where *D*_{a} is the dimension of the underlying dynamical system. It guarantees that *D* observed variables span a state space that completely embeds the dynamical system. The only available estimates on the dimension of tropical cyclone tracks [derived for the Australia region (Fraedrich and Leslie 1989)] suggest *D*_{a} ∼ 8, which leads to an embedding dimension of *D* ∼ 17, which compares well with the number of track parameters (19) used for model building (Fig. 2).

The Australian-region tropical cyclone dataset contains the following parameters: zonal and meridional cyclone center position, date, and time (UTC). All data used for training are the so-called best-track data. Each entry in the dataset provides the following input parameters for model building: zonal and meridional displacements, positions, and time. Displacements for a time lag of 6 h are used up to 24 h into the past, which, together with the positions and the day of year, characterize region and season. The data are divided into a dependent set (1958–81 with 371 cyclones) and an independent verification set (1981–2000 with 161 cyclones, or 1991–2000 with 85 cyclones). A cyclone is considered usable if its lifetime equals or exceeds the forecast period (72 h) plus the 24 h needed to define the state. The ensemble size is derived by applying the self-adapting analog scheme several times, increasing the number of ensemble members from 1 to 30; at an ensemble size of 20, the scheme achieves the best performance. A time lag of at least 3 days (72 h) between observation and analog, or between two analogs, is required so that the nearest neighbors are independent. The scheme is run for 400 iterations to find the optimal metric weights. Figure 2 shows the optimal metric weights attached to the track parameters, obtained after 256 iterations. The most important component is the zonal displacement of the last 6 h, followed by the meridional displacement of the last 6 h; the other zonal and meridional displacements are more important than the positions.
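One plausible way to assemble the predictor state from a track is sketched below. The exact composition of the 19 parameters is an assumption of this sketch: four 6-h displacement vectors over the past 24 h, the five corresponding positions, and the day of year happen to yield 19 components, but the paper does not spell the layout out in this section.

```python
import numpy as np

def state_vector(track_lonlat, day_of_year):
    """Assemble a predictor state from the last 24 h of 6-hourly positions
    (five (lon, lat) fixes, oldest first). Assumed layout: zonal and
    meridional 6-h displacements over the past 24 h (8 values), the five
    positions (10 values), and the day of year (1 value)."""
    t = np.asarray(track_lonlat, dtype=float)   # shape (5, 2)
    disp = np.diff(t, axis=0)                   # four 6-h displacement vectors
    return np.concatenate([disp.ravel(), t.ravel(), [float(day_of_year)]])
```

The adapted weights *G*_{k} then act on exactly these components, which is why Fig. 2 can rank displacements, positions, and season against one another.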

## 3. Tropical cyclone track forecasts: Forecast error and ensemble spread

Forecast accuracy is measured by the rms error *E*, the great-circle distance between the observed and forecast track positions, where (*x*_{0}, *y*_{0}) is the observed zonal and meridional best-track position and (*x*_{f}, *y*_{f}) is the forecast position. The skill score *s* compares the performance of the analog scheme with a reference model:

*s* = (*E*_{ref} − *E*_{analog})/*E*_{ref}.

Positive skill indicates that the analog model has lower errors than the reference, and vice versa. A CLIPER-type model for the Australian region (Leslie et al. 1990, following Neumann 1972; Neumann and Pelissier 1981a,b; Pike and Neumann 1987) is chosen as the reference model; it predicts zonal and meridional cyclone displacements from the climatological and persistence behavior of the storm. Results from the forecast experiments are presented for independent verifications using best-track and operational-track data (1981–2000 with 161 cyclones and 1991–2000 with 85 cyclones).
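The skill score is simple enough to state directly as code, a transcription of *s* = (*E*_{ref} − *E*_{analog})/*E*_{ref}:

```python
def skill_score(e_ref, e_analog):
    """s = (E_ref - E_analog) / E_ref: positive when the analog scheme has a
    lower error than the reference (CLIPER), negative when it is worse."""
    return (e_ref - e_analog) / e_ref
```

For example, an analog rms error of 80 km against a CLIPER error of 100 km gives a skill of +20%, the order of magnitude reported for the whole-basin ensemble mean forecasts.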

### a. Ensemble mean forecast error and skill

For the whole Australian basin (Fig. 3; verification dataset 1981–2000), the self-adapting analog ensemble forecasts (*N* = 20) outperform the best adapted analog model (*N* = 1) and the corresponding forecasts based on the Euclidean metric (*N* = 1 and *N* = 20). In comparison with CLIPER (diamonds), the self-adapting ensemble mean forecasts (*N* = 20) reveal, on average, positive skill of about 20% (Fig. 4a; verification dataset 1991–2000), with 40% as the maximum attained (12-h forecasts in the western domain). The regional performance of the adapted analog system is demonstrated in two ways: (i) The performance of the forecast model, adapted to the whole Australian basin, is evaluated for each subdomain (Figs. 4b–d, solid line). (ii) This evaluation may be compared with the forecast system adapted to each of the three subdomains individually (Figs. 4b–d, dashed line), which shows less skill. For example, in the northern region (Gulf of Carpentaria), the skill when compared with CLIPER deteriorates from +5% for 12-h forecasts to −15% for 48-h forecasts. It appears that the optimal analog search in the subdomains is confined to too-small ensembles of highly erratic tracks that utilize only 20% of the total domain learning set (30% and 50% in the eastern and western regions, respectively). Note that the subdomain adaptations lead to three analog models and that their regional skill cannot simply be combined as the skill of the whole basin model. Forecasts based on operational tracks did not change these results except that the Australian CLIPER performed slightly better, which may be attributed to the averaging procedures involved initially (Figs. 5a–d). Note that a similar adaptation with operational track data may, if available, improve analog forecast accuracy.

### b. Forecast error distribution

The error analysis presented above is confined to the first moments of the 20-yr best-track forecast statistics. The error distribution shows a more complete picture for the 12- and 24-h forecasts (Fig. 6). The shape resembles a chi-square distribution. The means, medians, and standard deviations for 12 h (24 h) are 〈*E*〉 = 76 (167), med(*E*) = 63 (145), and std(*E*) = 57 km (111 km); the smallest (largest) outliers are 0.7 (0.6) and 764 km (1266 km).

### c. Comparison

The performance of the adapted ensemble analog scheme is compared with the NWP model forecasts of the Met Office (UKMO) and the official forecasts of the Australian Bureau of Meteorology (BOM). The mean forecast errors of the UKMO NWP [indicated in Fig. 3 (information available online at www.metoffice.com) as a shaded area] characterize the decade from 1988 to 2000. During this time, the quality of numerical predictions of tropical cyclones has improved considerably because of better data assimilation and numerical techniques. Still, up to 24-h lead time (and even beyond that), the accuracy of the adapted analog ensemble forecast scheme is surprisingly high. Likewise, comparison of the skills of the official BOM and adapted analog ensemble forecasts shows the good performance of the latter, in particular in both the eastern and western basins (Fig. 5). Only the northern-domain 24-h BOM predictions appear to be superior for the reasons given above, which affects the overall skill score at that lead time. Table 1 summarizes the results.

### d. Case studies

Two case studies are presented for the Tropical Cyclones (TC) Rosita and Jacob (Fig. 7). The analog ensemble mean predictions up to 24 h (with operational-track data) are compared with the 24-h predictions with UKMO NWP and the official 24-h forecasts of BOM. All forecasts show good performance for Jacob, but they miss Rosita's eastward turn toward the coast. On the last leg, only the analog scheme predicts landfall; UKMO and BOM predict stationarity. The curvature prediction may be improved by exploiting ensemble forecasts, if the spread of their members is sufficiently large. Climatologically relevant TC track clusters may provide useful forecast guidance, if probabilities of occurrence can be associated with them.

### e. Ensemble forecast spread

Ensemble spread is a measure of the dispersion of ensemble members in terms of their standard deviation about the ensemble mean. In a perfect-model and perfect-ensemble environment, ensemble spread provides a measure of the expected forecast error. In practical forecasting this hypothesis is not fulfilled; thus error versus spread displays a highly scattered relationship [for tropical cyclones see Elsberry and Carr (2000) and Goerss (2000), for NWP forecasts see Molteni et al. (1996), and for idealized external predictability experiments see Fraedrich and Ziehmann-Schlumbohm (1994)]. Sampling the rms error in spread bins of about 7 km (with about 30 forecast samples each) reveals some structure in how the sample mean and sample median of the rms error change with ensemble spread (Fig. 8; 1981–2000, best model *N* = 20): for 12-h forecasts, both the average and median errors of the ensemble mean forecasts increase linearly with ensemble spread (slope of about 0.5). This relationship deteriorates with increasing lead time: the average 24-h forecast errors still increase linearly with spread (slope of about 0.3), whereas the median does not change but the quartiles grow.
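The binning procedure behind this error–spread analysis can be sketched as follows; the bin width follows the roughly 7-km bins described in the text, while the function name and return layout are illustrative:

```python
import numpy as np

def error_spread_relation(spread, error, bin_width=7.0):
    """Bin forecast errors by ensemble spread (bins of ~7 km, as in the text)
    and return, for each nonempty bin, (bin center, mean error, median error)
    -- the sampled error-spread relation."""
    spread, error = np.asarray(spread, float), np.asarray(error, float)
    idx = np.floor(spread / bin_width).astype(int)
    rows = []
    for b in np.unique(idx):
        e = error[idx == b]
        rows.append(((b + 0.5) * bin_width, e.mean(), np.median(e)))
    return rows
```

Fitting a line through the binned means then yields the slopes (about 0.5 at 12 h, about 0.3 at 24 h) quoted above.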

### f. Limits of predictability (forecasts and forecast-error predictions)

Analyzing forecast errors is based on the joint distribution of two fields, the ensemble mean forecasts and their realizations. Here, error analysis is confined to the mean distances between the two fields and their growth with increasing lead time. When performance drops below that of a reference forecast model (CLIPER), a limit of predictability is attained, which the adapted analog forecasts reach near 72 h (Figs. 3 and 5). Predictions of the forecast errors themselves, however, require forecast ensembles and a suitable verification method. This is provided by the error–spread relation, which relates the ensemble mean forecast error to the ensemble spread; here the error–spread analysis evaluates the mean forecast error in classes of ensemble spread. This relation deteriorates with increasing lead time in comparison with the perfect-model, perfect-ensemble reference (Fig. 8). The adapted analog ensemble forecasts reach their error–spread performance limit by about 24 h.

### g. Central pressure

The central pressure, which is reported only for some of the tropical cyclones, can be exploited by the analog predictions after the ensemble members have been identified through the optimal analog track search. These central pressures (as deviations from the initial value) are used to predict the ensemble mean and spread for each individual forecast. The ensemble means are evaluated only if at least 50% of the ensemble members provide central pressure (Fig. 9). Up to 36 h, the rms error grows almost linearly to about 15 hPa (the climatological standard deviation). Beyond that, the official BOM predictions (1985–2001) improve over the analog forecasts, which supports the predictability limit of the analog model estimated by the error–spread analysis of the analog track forecasts. However, the spread of the central pressure ensembles does not contain information that allows prediction of the forecast errors (error–spread relation, not shown). It is not surprising that these central pressure analog forecasts yield unsatisfactory results, because central pressure has neither been used in the optimal analog search nor have the ensemble members been selected according to their life cycle and, therefore, the associated central pressure. A more suitable central pressure forecast model, based on optimally adapted ensembles, is in preparation.
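The 50% availability rule for the central pressure ensemble mean can be sketched as below; representing missing member pressures as NaN is an assumption of this sketch, not a detail from the paper:

```python
import numpy as np

def central_pressure_forecast(member_pressures, min_fraction=0.5):
    """Ensemble-mean central pressure (with spread), evaluated only when at
    least half of the ensemble members report a central pressure (NaN marks
    a missing report), following the 50% availability rule in the text."""
    p = np.asarray(member_pressures, float)
    valid = p[~np.isnan(p)]
    if valid.size < min_fraction * p.size:
        return None                 # too few reporting members: no forecast
    return float(valid.mean()), float(valid.std())
```

Forecasts with too few reporting members are simply excluded from the verification rather than padded.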

## 4. Discussion and conclusions

A self-adapting analog forecast scheme has been developed for ensemble predictions of tropical cyclone tracks in the Australian region. Starting with the Euclidean metric and a given set of states defined by best-track data, the model learns how to weight the components of the predictor states by minimizing the forecast error. These weights, which result from the metric adaption, indicate the importance of the corresponding components: displacements and season are more important for an analog search than cyclone positions. A comparison of different analog models shows that both ensemble forecasting and metric adaption lead to substantial forecast improvements. Comparison of the self-adapting analog ensemble forecasts with an Australian-region CLIPER reference yields different results for the three regions of the Australian basin, with positive skill in the eastern and western domains and negative skill in the northern domain. Further comparison with NWP model forecasts of the Met Office and the official forecasts of the Australian Bureau of Meteorology demonstrates the good average performance of the analog ensemble scheme; two TC cases are presented to show the guidance provided by the analog scheme in situations in which it is needed. Mean (and median) forecast errors in spread classes grow linearly with increasing ensemble spread. Because this relation deteriorates with increasing lead time, it may qualify as a performance measure of ensemble predictions. Operational TC ensemble forecasts are available online (http://visibility.dkrz.de/TC).

## Acknowledgments

Thanks are given to Alan Sharp, Frank Woodcock, and Jon Gill of the Australian Bureau of Meteorology, who provided the datasets of the Australian-region tropical cyclone tracks, the Australian Bureau of Meteorology CLIPER, and official forecast performances. Jim Arthur of the Northern Territory Regional Office, Jim Davidson of the Queensland Regional Office, and their staff provided insight and support. The constructive reviews by Lance M. Leslie and Steve Lyons are appreciated.

## REFERENCES

Cheung, K. K. W., 2001: Ensemble forecasting of tropical cyclone motion: Comparison between regional bred modes and random perturbations. *Meteor. Atmos. Phys.*, **78**, 23–34.

Elsberry, R. L., and L. E. Carr, 2000: Consensus of dynamical tropical cyclone track forecasts—Error versus spread. *Mon. Wea. Rev.*, **128**, 4131–4138.

Fraedrich, K., and L. M. Leslie, 1987: Combining predictive schemes in short-term forecasting. *Mon. Wea. Rev.*, **115**, 1640–1644.

Fraedrich, K., and L. M. Leslie, 1989: Estimates of cyclone track predictability. Part I: Tropical cyclones in the Australian region. *Quart. J. Roy. Meteor. Soc.*, **115**, 79–92.

Fraedrich, K., and C. Ziehmann-Schlumbohm, 1994: Predictability experiments of persistence forecasts in a red noise atmosphere. *Quart. J. Roy. Meteor. Soc.*, **120**, 387–428.

Fraedrich, K., and B. Rueckert, 1998: Metric adaption for analog forecasting. *Physica A*, **253**, 379–393.

Fraedrich, K., R. Morison, and L. M. Leslie, 2000: Improved tropical cyclone track predictions by error recycling. *Meteor. Atmos. Phys.*, **74**, 51–56.

Goerss, J., 2000: Tropical cyclone track forecasts using an ensemble of dynamical models. *Mon. Wea. Rev.*, **128**, 1187–1193.

Hope, J. R., and C. J. Neumann, 1970: An operational technique for relating the movement of existing tropical cyclones to past tracks. *Mon. Wea. Rev.*, **98**, 925–933.

Leslie, L. M., and K. Fraedrich, 1990: Reduction of tropical cyclone position errors using an optimal combination of independent forecasts. *Wea. Forecasting*, **5**, 158–161.

Leslie, L. M., G. J. Holland, M. Glover, and F. Woodcock, 1990: The skill of tropical cyclone position forecasting in the Australian region. *Aust. Meteor. Mag.*, **38**, 87–92.

Molteni, F., R. Buizza, T. N. Palmer, and T. Petroliagis, 1996: The ECMWF ensemble prediction system: Methodology and validation. *Quart. J. Roy. Meteor. Soc.*, **122**, 73–119.

Neumann, C. J., 1972: An alternate to the HURRAN (Hurricane Analog) tropical cyclone forecast system. NOAA Tech. Memo. NWS SR-62, 32 pp.

Neumann, C. J., 1981: Trends in forecasting the tracks of Atlantic tropical cyclones. *Bull. Amer. Meteor. Soc.*, **62**, 1473–1485.

Neumann, C. J., and J. M. Pelissier, 1981a: An analysis of Atlantic tropical cyclone forecast errors, 1970–1979. *Mon. Wea. Rev.*, **109**, 1248–1266.

Neumann, C. J., and J. M. Pelissier, 1981b: Models for the prediction of tropical cyclone motion over the North Atlantic: An operational evaluation. *Mon. Wea. Rev.*, **109**, 522–538.

Pike, A. C., and C. J. Neumann, 1987: The variation of track forecast difficulty among tropical cyclone basins. *Wea. Forecasting*, **2**, 237–241.

Sauer, T., J. A. Yorke, and M. Casdagli, 1991: Embedology. *J. Stat. Phys.*, **65**, 579–615.

Sievers, O., K. Fraedrich, and C. C. Raible, 2000: Self-adapting analog ensemble predictions of tropical cyclone tracks. *Wea. Forecasting*, **15**, 623–629.

Zhang, Z., and T. N. Krishnamurti, 1997: Ensemble forecasting of hurricane tracks. *Bull. Amer. Meteor. Soc.*, **78**, 2785–2795.

Fig. 2. Optimal adapted weights of the analog ensemble forecast model obtained from the learning set (1958–81) for 20 ensemble members. The dashed line shows the (initial) Euclidean weights. The 19 track parameters are defined at the bottom.

Citation: Weather and Forecasting 18, 1; 10.1175/1520-0434(2003)018<0003:AEFOTC>2.0.CO;2


Fig. 3. Australian tropical cyclone position forecast error (rms error; km): mean forecast error (1981–2000) changing with forecast lead time (h) for the ensemble analog prediction models based on best-track data. The ensemble size is *N* = 1 and *N* = 20 for the optimal and Euclidean metric; the Australian CLIPER (12-h intervals up to 48 h) is also included. NWP forecasts of the UKMO (1991–2000) are indicated by shading.


Fig. 4. Australian tropical cyclone forecast skill with reference to the CLIPER zero-skill or reference forecasts (dashed line) based on best-track data. The skills of the adapted analog ensemble forecast models (1991–2000) change with forecast lead time (h): (a) the Australian region, and the (b) eastern (142°–160°E), (c) western (90°–125°E), and (d) northern (125°–142°E) areas. Analog forecasts adapted to the whole basin and to the subdomains (Analog REG) are given by the full and dashed lines.


Fig. 5. Same as Fig. 4 but based on operational-track data: The skill of the adapted analog ensemble forecasts (1991–2000) is compared with the BOM official forecasts (triangles).


Fig. 6. Forecast error (rms error; km) distribution for 12- (thick line) and 24-h (thin line, shaded) forecasts provided by the analog ensemble prediction model (optimal metric with *N* = 20 ensemble members).


Fig. 7. The 24-h forecasts (dashed) of the tracks of Tropical Cyclones (left) Rosita (from 17 to 19 Apr 2000) and (right) Jacob (from 2 to 7 Feb 1996): (a), (b) adapted analog ensemble model (open circles, every 6 h), (c), (d) UKMO (open squares), and (e), (f) official BOM forecasts [open triangles; A. Sharp (2001, personal communication)]. The observed cyclone positions are indicated as crosses, and the initial positions are shown as full circles.


Fig. 8. Relation between forecast error and ensemble spread: median (full circles with quartiles) and mean (dotted) of the great-circle distance rms error in spread bins of about 7 km (with about 30 forecast samples each). The linear slope for both the mean and median is indicated. (a)–(d) From 6- to 24-h forecasts.


Fig. 9. Australian tropical cyclone central pressure forecast error (rms error; hPa): mean forecast error (1981–2000) changing with forecast lead time (h) for the optimal ensemble analog prediction model based on best-track data. The official BOM forecasts (1985–2001) are also included (triangles).


Tropical cyclone forecast error (rms error; km) in the total Australian basin: mean forecast error (1981–2000) changing with forecast lead time (h) for the ensemble analog prediction models based on operational data with optimal metric; the Australian CLIPER (12-h intervals up to 48 h); 24-h NWP forecasts of UKMO (1991–2000); and 12-h official forecasts of BOM (1985–2001). All forecasts are based on operational data