Forecasting Northern Australian Summer Rainfall Bursts Using a Seasonal Prediction System

Tim Cowan,a,b Matthew C. Wheeler,b S. Sharmila,a,b Sugata Narsey,b and Catherine de Burgh-Dayb

a Centre for Applied Climate Sciences, University of Southern Queensland, Toowoomba, Australia
b Bureau of Meteorology, Melbourne, Australia

Abstract

Rainfall bursts are relatively short-lived events that typically occur over consecutive days, up to a week. Northern Australian industries like sugar farming and beef are highly sensitive to burst activity, yet little is known about the multiweek prediction of bursts. This study evaluates summer (December–March) bursts over northern Australia in observations and multiweek hindcasts from the Bureau of Meteorology’s multiweek to seasonal system, the Australian Community Climate and Earth-System Simulator, Seasonal version 1 (ACCESS-S1). The main objective is to test ACCESS-S1’s skill in confidently predicting tropical burst activity, defined as rainfall accumulation exceeding a threshold amount over three days, for the purpose of producing a practical, user-friendly burst forecast product. The ensemble hindcasts, made up of 11 members for the period 1990–2012, display good predictive skill out to lead week 2 in the far northern regions, despite overestimating the total number of summer burst days and the proportion of total summer rainfall from bursts. Coinciding with a predicted strong Madden–Julian oscillation (MJO), the skill in burst event prediction can be extended out to four weeks over the far northern coast in December; however, this improvement is not apparent in other months or over the far northeast, which shows generally better forecast skill with a predicted weak MJO. The ability of ACCESS-S1 to skillfully forecast bursts out to 2–3 weeks suggests the bureau’s recent prototype development of a burst potential forecast product would be of great interest to northern Australia’s livestock and crop producers, who rely on accurate multiweek rainfall forecasts for managing business decisions.

© 2021 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Tim Cowan, timothy.cowan@usq.edu.au


1. Introduction

The wet season onset over tropical northern Australia typically occurs in October, with the monsoonal rains peaking in February and withdrawing by April (Berry and Reeder 2016; Lisonbee et al. 2019; Wheeler and McBride 2005). Around 80% of northern Australia’s annual mean rainfall occurs between October and March (Sharmila and Hendon 2020), punctuated by episodic rainfall events, known as rainfall bursts, that mostly last between 3 and 10 days (Moise et al. 2020). Rainfall bursts over the far northern coast are often associated with westerly wind bursts stemming from active Madden–Julian oscillation (MJO) pulses passing through the Maritime Continent and western Pacific sectors every 30–60 days (Hendon and Liebmann 1990a,b). However, a recent study by Narsey et al. (2017) pinpointed a significant influence on burst activity from midlatitude Rossby wave trains and absolute vorticity fluxes from the southern monsoon boundary; these southerly flux–induced events peak in November but are dominant throughout the October–March wet season and are less influenced by the MJO.

Rainfall burst events are crucial for sustaining pasture growth throughout the wet season (e.g., October–March) which allows the livestock and agricultural sectors to prosper across northern Australia’s arid, semiarid and tropical savanna regions (Mollah and Cook 1996). Important beef cattle regions around the Gulf of Carpentaria have seen rainfall variability increase by around 20% since the 1960s, which has driven an increase in pastoral growth variability (Cobon et al. 2019). There has been strong interest from northern producers to use seasonal forecasts to help with operational decisions with respect to crop and livestock production (e.g., Brown et al. 2019), with beef producers pushing for a multiweek forecast product that describes significant rainfall events like bursts. Improving the accuracy and reliability of multiweek forecasts, particularly related to rainfall, would likely lead to increased returns for northern graziers (An-Vo et al. 2019). For risk-averse producers, more accurate predictions of extreme wet season rainfall beyond the 1–2-week lead time may partially prevent damaging stock losses by increasing forewarning times of consecutive heavy rainfall days (Cowan et al. 2019). While the economic value of a skillful rainfall forecast to northern cattle industries may be relatively low (less than $2 per head of cattle for northeast Queensland; Cobon et al. 2020), for other industries like sugar and cotton, the benefits of an accurate forecast are strongly linked to the extreme nature of the wet conditions (Darbyshire et al. 2020). Therefore, the main purposes of this study are to derive a simple burst definition fit for purpose across northern Australia, evaluate the ability of the Bureau of Meteorology seasonal forecast system to predict bursts, and showcase a real-time burst prototype forecast product.

There is currently no overarching wet season burst definition for Australia’s monsoon region. One of the first definitions, derived by Troup (1961), focused on area-averaged rainfall and low-level winds near the northern Australian city of Darwin, and considered events after 1 November (i.e., it did not capture premonsoonal bursts). Drosdowsky (1996) extended this to investigate the relationship between Darwin rainfall and westerly wind bursts and found good agreement with the number of active rain and wind events. More recently, dynamical insights into Australian monsoon bursts were described by Berry and Reeder (2016) using a rainfall-only definition, whereby area-averaged rainfall over tropical northern Australia has to rapidly transition from 0.5 standard deviations below the smoothed seasonally varying daily climatology to 0.5 standard deviations above in fewer than 7 days. Using Darwin rainfall, Moise et al. (2020) devised a more straightforward burst definition, where daily rainfall must exceed its long-term average rainfall for at least 3 out of 5 days. This approach removes isolated rainfall events from consideration, much as the Berry and Reeder (2016) method does not classify short rainfall peaks as bursts. The Moise et al. (2020) study found that around 85% of Darwin’s December–March rainfall comes from bursts with between 14% and 48% of days deemed as burst days.

World-leading subseasonal-to-seasonal models, like the European Centre for Medium-Range Weather Forecasts system, can skillfully predict daily rainfall intensity out to weeks 3 and 4 over northern Australia, mainly because of intraseasonal and interannual modes of variability (Moron and Robertson 2020). The Bureau of Meteorology’s multiweek to seasonal forecast system, the Australian Community Climate and Earth-System Simulator, Seasonal version 1 (ACCESS-S1), also shows potential in predicting certain extreme rainfall indices out to a month (King et al. 2020). It is well known that MJO associated convection and circulation anomalies can affect subseasonal rainfall over Australia, acting as one of the primary sources of subseasonal predictability (Marshall and Hendon 2015). A recent study (Marshall et al. 2021) using ACCESS-S1’s 11-member ensemble forecast shows the model’s good skill in predicting extreme weekly summer rain associated with predicted strong MJO amplitude across regions such as northwest Australia and southeast Queensland over the 1990–2012 period. The same study showed higher skill around regions to the south of the Gulf of Carpentaria when the MJO is strong in the spring (September–November) season; however, there is little skill in summer. It remains to be determined whether the prediction skill associated with the MJO extends to multiday rainfall bursts that exceed lesser extreme thresholds.

In this study, our main focus is on assessing the prediction skill of rainfall bursts over northern Australia in ACCESS-S1 from a multiweek perspective. This includes investigating ACCESS-S1 hindcast biases and the dependency of prediction skill on strong MJO activity (e.g., Marshall et al. 2021). This work builds upon the ACCESS-S1 hindcast skill evaluation of the northern rainfall onset (Cowan et al. 2020), the date by which 50 mm of rain has accumulated from 1 September, which is considered a proxy for the start of northern pasture growth (Lo et al. 2007). While the northern rainfall onset assessment was more seasonally focused and tied in with the role of El Niño–Southern Oscillation (ENSO), the forecast time scale of this study is multiweek, which is why the MJO potentially plays an important role. Like Cowan et al. (2020), we also showcase a real-time forecast example and evaluation of a burst event using our newly developed prototype product (see section 2c for product description), which will be made operational in 2022.

We use a simple definition of consecutive (or near-consecutive) daily rainfall totals over an accumulation window (detailed in section 2c) to represent bursts, rather than rainfall amounts that are referenced to historical measures such as standard deviations or percentiles. Our simple definition makes it more suitable for developing forecast products that are easy for end-users to understand and can be modified to suit different climatic regions. Knowledge of the forecast system hindcast accuracy gives an end user the necessary background information regarding model biases over their region and month(s) of interest, and how forecast skill changes with lead time. The observational datasets, ACCESS-S1 prediction system, and burst definition(s) are detailed in section 2. Observed burst behavior, hindcast biases and skill, the influence of the MJO, and an analysis of forecasts from early 2021 are shown in section 3. Discussions and conclusions are presented in section 4.

2. Data and methods

a. Observational datasets

For defining observed bursts, we employed the 5-km gridded Australian rainfall from the Australian Water Availability Project (AWAP; Jones et al. 2009). Northern Australia is defined as land points north of 29°S to encompass important pastoral regions in the southeast of the state of Queensland, although most burst activity occurs equatorward of 20°S (Narsey et al. 2017). To represent regional cloudiness, we used daily outgoing longwave radiation (OLR) data from the National Oceanic and Atmospheric Administration (NOAA), spatially interpolated to a 2.5° × 2.5° grid using the nearest-neighbor method (Liebmann and Smith 2006). We extracted rainfall data from October 1960 to April 2018 to evaluate the long-term change in burst behavior, and OLR for the hindcast period 1990–2012.

The analysis focused on December–March, when the majority of monsoon rain falls across northern Australia (Moise et al. 2020). To assess the influence of ENSO on observed burst activity, we used the oceanic Niño index (ONI) to determine significant El Niño and La Niña events from 1960/61 to 2017/18. The ONI is a 3-month running mean of sea surface temperature (SST) anomalies in the Niño-3.4 region (5°S–5°N, 170°–120°W). The ONI was calculated using observations from the Extended Reconstructed Sea Surface Temperature, version 5 dataset (ERSST.v5; Huang et al. 2017), derived from Argo floats and the International Comprehensive Ocean–Atmosphere Dataset Release 3.0. El Niño and La Niña events are defined as occurring when the ONI reaches ±0.5°C for four consecutive 3-month seasons from October–December through January–March, resulting in 17 El Niño and 16 La Niña events. To make near-equal sample sizes, we included 1971/72 and 1983/84 in the La Niña subset despite the ONI only reaching −0.4°C in January–March.
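To make the ENSO classification above concrete, the following is a minimal sketch (not the code used in this study) of how the ONI test could be applied to a single wet season, assuming a pandas Series of monthly Niño-3.4 SST anomalies with a monthly PeriodIndex; the function name and inputs are illustrative only.

```python
import pandas as pd

def classify_enso(nino34_anom: pd.Series, start_year: int, threshold: float = 0.5) -> str:
    """Classify the wet season beginning in `start_year` as El Nino, La Nina, or neutral.

    `nino34_anom`: monthly Nino-3.4 SST anomalies (degC) with a PeriodIndex (freq='M').
    The ONI is the 3-month running mean; following the definition in the text, an
    event requires the ONI to reach +/-0.5 degC in the four overlapping seasons
    OND, NDJ, DJF, and JFM (i.e., centered on Nov, Dec, Jan, and Feb).
    """
    oni = nino34_anom.rolling(3, center=True).mean()  # 3-month running mean
    y = start_year
    centres = [pd.Period(f"{y}-11", "M"), pd.Period(f"{y}-12", "M"),
               pd.Period(f"{y + 1}-01", "M"), pd.Period(f"{y + 1}-02", "M")]
    vals = oni.loc[centres]
    if (vals >= threshold).all():
        return "El Nino"
    if (vals <= -threshold).all():
        return "La Nina"
    return "neutral"
```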

b. ACCESS-S1 forecast system

We examined the prediction skill for northern Australian bursts in the ACCESS-S1 coupled seasonal forecast system (Hudson et al. 2017). This system consists of the Met Office’s Global Coupled model configuration 2 (GC2) forecast system (MacLachlan et al. 2015), coupled to a land surface model: Joint U.K. Land Environment Simulator (JULES; Walters et al. 2017). The ocean and sea ice component of GC2 is initialized using the Nucleus for European Modeling of the Ocean assimilation (Megann et al. 2014). Further details on the origins of the atmospheric, oceanic and sea ice initial conditions (and coupler) are provided in Table 1 of Hudson et al. (2017).

The ACCESS-S1 hindcast suite consists of an 11-member ensemble that is initialized four times a month (on the 1st, 9th, 17th, and 25th) over 1990–2012. We used hindcasts calibrated using the quantile–quantile approach against the 5-km AWAP observations (Australian Bureau of Meteorology 2019). This corrects both the mean and shape of ACCESS-S1’s rainfall distribution, giving the calibrated hindcasts much-improved skill in forecasting wet season precursors, compared against raw and mean-bias-corrected hindcasts (Cowan et al. 2020). While each hindcast member runs out to 7 months from initialization, we focus on the first 4 weeks (28 days) of each hindcast due to the lack of skill in predicting intraseasonal drivers associated with extreme rainfall, like the MJO, beyond one month (Camp et al. 2018; King et al. 2020; Marshall and Hendon 2019). Here, a week 1 (or weeks 1–2) forecast refers to a lead time 0 forecast; as such, values for week 2 (or weeks 2–3) refer to a lead time of 1 week (or one fortnight). We initially focused on the hindcasts initialized on the first day of the month (but extend to other dates when assessing the association with the MJO).
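As a small illustration of the lead-time convention described above (week 1 is the lead time 0 forecast, week 2 is lead time 1, and so on), a hypothetical helper for slicing the first 28 days of a hindcast into lead weeks might look as follows; the array layout is an assumption, not the ACCESS-S1 data format.

```python
import numpy as np

def lead_week_slices(n_days: int = 28, week_len: int = 7) -> dict:
    """Map week number (1-4) to the slice of days it covers from initialization.

    Week w covers days (w-1)*7 to w*7-1, so week 1 is lead time 0,
    week 2 is lead time 1, and so on.
    """
    return {w + 1: slice(w * week_len, (w + 1) * week_len)
            for w in range(n_days // week_len)}

# Example with a dummy hindcast array shaped (members, days):
dummy = np.zeros((11, 28))
week2 = dummy[:, lead_week_slices()[2]]   # lead time 1 week (days 7-13)
```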

To evaluate whether a strong or weak MJO amplitude influences the burst prediction skill, we utilized the real-time multivariate MJO (RMM) indices applied to each ACCESS-S1 ensemble member (Marshall and Hendon 2019). In this study, the MJO is deemed to be strong (or weak) if the ensemble mean of the individual member RMM amplitudes, $\sqrt{\mathrm{RMM1}^2+\mathrm{RMM2}^2}$, is greater (or less) than 1.2 for the entire forecast period (e.g., seven days for week lead times). This separation allows a clear distinction between strong and weak MJO cycles, and the exclusion of instances where the MJO transitions from a weak to a strong amplitude (or vice versa) within the forecast period. Our determination of weak or strong MJO amplitudes is slightly more conservative than the strong MJO amplitude > 1 definition in Marshall et al. (2021).
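The strong/weak screening above can be sketched as follows (a simplified illustration under assumed array shapes, not the operational diagnostic): the ensemble mean of the member RMM amplitudes must stay above (below) 1.2 on every day of the forecast period for the MJO to be labeled strong (weak).

```python
import numpy as np

def mjo_amplitude_class(rmm1: np.ndarray, rmm2: np.ndarray, threshold: float = 1.2) -> str:
    """Classify the MJO for one forecast period as 'strong', 'weak', or 'mixed'.

    rmm1, rmm2: arrays shaped (members, days) of RMM indices for the period.
    """
    amp = np.sqrt(rmm1**2 + rmm2**2)         # amplitude per member and day
    ens_mean_amp = amp.mean(axis=0)          # ensemble mean amplitude per day
    if (ens_mean_amp > threshold).all():
        return "strong"
    if (ens_mean_amp < threshold).all():
        return "weak"
    return "mixed"  # transitions within the period are excluded from the analysis
```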

c. Rainfall burst definition

We applied a simple definition to diagnose a burst event, where the rainfall accumulated over a set number of days had to exceed a given threshold. Northern Queensland beef producers often use a rainfall accumulation definition for determining their “green break of the season,” a time of year when land management decisions regarding moving cattle onto new pastures are necessary (Balston and English 2009). Our simple definition is easier for producers to interpret than more complex definitions (e.g., Narsey et al. 2018), and the rainfall threshold can be modified to suit a particular user’s needs.

For this study, we settled on one arbitrary burst definition:
B303d burst = 30 mm in 3 days.
Our definition was chosen in part through feedback from beef cattle industry representatives throughout northern Australia, given the objective of this research is to produce a useful, practical, and user-friendly burst forecast product. As such, our definition is less applicable in semiarid environments, but is similar to that of Mollah and Cook (1996), who used 50 mm over three days with a minimum of 2 mm on any day as a criterion for sowing crops in the Northern Territory’s western Top End. Other uses of a simple threshold definition of extended rain include hay cutting and baling (D. Rea, Fitzroy Basin Association 2020, personal communication), timing of crop harvesting (Mollah and Cook 1996), and even the potential for wheat sowing in regions outside of the tropical north (Kerr and Abrecht 1992).

We identified burst days when a 3-day moving accumulation window met the B303d criterion (first condition) and the day itself recorded more than 0.2 mm day−1 (second condition), consistent with a prototype burst product developed within the Bureau of Meteorology. The second condition removes nonrainfall event days (Dey et al. 2020), even if they occur midburst event. Table 1 shows a mock rainfall example over a period of 9 days. Applying the first condition results in two burst events (days 3–4 and 7–9). Applying the secondary above 0.2 mm day−1 condition removes the middle day of the second burst event as day 8 records no rain. In later analysis, we tested the sensitivity of observed burst day frequency to different burst thresholds (e.g., 20, 50, 70 mm), keeping the accumulation period constant at 3 days. Extending the period out to 5 or 7 days tended to increase the number of observed burst days per season, but with less intensity per individual burst day (i.e., because the burst influence is spread over a longer time window). We also tested the sensitivity of raising the minimum daily threshold to 2 and 5 mm day−1 and found that this did not affect the hindcast skill or the magnitude of the model biases.

Table 1. Explanation of how a B303d burst event was defined for this study. The numeral 1 values in bold in the bottom row indicate the defined burst event in a 9-day period after applying the two conditions. The assumption is that the 2 days prior to day 1 are 0-mm days.
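The two conditions can be expressed in a few lines of code. The sketch below is one plausible implementation consistent with the Table 1 description (days before the series are treated as 0-mm days, and the 3-day accumulation window ends on the day being tested); it is not the Bureau's operational code, and the 9-day rainfall values are illustrative, not those of Table 1.

```python
import numpy as np

def burst_days(rain, threshold: float = 30.0, window: int = 3, min_daily: float = 0.2):
    """Flag B303d burst days in a daily rainfall series (mm).

    First condition: the `window`-day accumulation ending on the day reaches
    `threshold`.  Second condition: the day itself records more than `min_daily` mm.
    """
    rain = np.asarray(rain, dtype=float)
    padded = np.concatenate([np.zeros(window - 1), rain])        # 0-mm days before day 1
    accum = np.convolve(padded, np.ones(window), mode="valid")   # trailing 3-day totals
    return (accum >= threshold) & (rain > min_daily)

# Mock 9-day series in the spirit of Table 1 (values are made up):
rain = [5.0, 10.0, 20.0, 5.0, 0.0, 15.0, 20.0, 0.0, 25.0]
print(burst_days(rain).astype(int))   # -> [0 0 1 1 0 0 1 0 1]
```

In this made-up series, the first condition flags days 3–4 and 7–9, and the second condition then removes day 8 because it records no rain, mirroring the behavior described for Table 1.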

From the burst definition and a selection of different thresholds, we have developed a prototype burst forecast product, called “burst potential,” derived from calibrated rainfall forecasts from ACCESS-S1. As highlighted in a case study later in the paper, the burst potential displays the likelihood of a burst event starting within the forecast period, with the event able to extend past the forecast period. The burst potential can be directly compared to an observed climatological (1960–2018) probability of a burst event occurring within the week in question. Currently, the burst potential has been made available to selected project stakeholders via the Bureau’s Forecast Viewing Tool (de Burgh-Day et al. 2020). Hence, the main motivation of this study is to provide a bias and skill assessment of rainfall bursts from ACCESS-S1 hindcasts for the purpose of improving confidence in burst potential forecasts.
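As an illustration of the product logic (not the operational implementation), the burst potential for one location and one forecast week could be computed as the percentage of calibrated ensemble members in which at least one burst day falls within that week, reusing the burst_days helper sketched above; the array layout and the choice of "at least one burst day" are assumptions.

```python
import numpy as np

def burst_potential(member_rain: np.ndarray, week_len: int = 7, **burst_kwargs) -> float:
    """Percentage of ensemble members with a burst day in the final `week_len` days.

    member_rain: array (members, days) of daily rainfall for one grid point,
    where the last `week_len` days are the forecast week and earlier days give
    the trailing accumulation window its context.
    """
    hits = [burst_days(r, **burst_kwargs)[-week_len:].any() for r in member_rain]
    return 100.0 * float(np.mean(hits))
```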

3. Results

a. Observed burst activity, 1960–2017

Intraseasonal monsoon activity in the form of short bursts of rainfall is often associated with broadscale convective systems, as determined from satellite OLR measurements (Wheeler and McBride 2005). A visual representation of burst activity from October 2017 to April 2018 is shown in Fig. 1 over four beef cattle stations: Dampier Downs (northern Western Australia; 18.52°S, 123.45°E), Mathison Station (north Northern Territory; 15.12°S, 131.69°E), Gregory Downs (northwest Queensland; 18.65°S, 139.25°E), and Charters Towers (northeast Queensland; 20.05°S, 146.27°E). This showcases the diverse burst behavior across northern Australia, a region influenced by the MJO (Hendon and Liebmann 1990b) and midlatitude interactions (Narsey et al. 2017). Here, we applied the B303d definition to individual AWAP grid points that best represent the cattle station locations. Also shown are area-weighted OLR deviations (from the 1980–2012 long-term daily mean) averaged over extended regions that encompass the cattle stations. Four prominent active convective anomalies (purple anomalies in Fig. 1) were observed from late December 2017 to mid-February 2018 over far northwest Australia, with five burst events at Dampier Downs (Fig. 1a). The most equatorward location, Mathison Station, also measured five burst events, with most burst days in January 2018 associated with intense convective activity over the north of the Northern Territory (Top End; Fig. 1b), stemming from Tropical Cyclone Joyce and a slow-moving tropical low.1 Across the Northern Territory border into Queensland, Gregory Downs experienced three clear burst events (late November and early March), with each burst peak associated with a broadscale convective deviation (Fig. 1c). The northeast Queensland regional center of Charters Towers saw its earliest burst event in mid-October, one short event in January, and considerable burst activity in late February. These examples highlight both the spatial and temporal diversity in the observed burst activity across northern Australia and why an arbitrary one-size-fits-all definition may not be appropriate for all regions.

Fig. 1. Observed outgoing longwave radiation (OLR; the top part of each panel) averaged over sections of northern Australia, and daily rainfall, 3-day accumulations, and burst days (the bottom part of each panel) at four cattle station regions (dots in maps): (a) Dampier Downs, (b) Mathison Station, (c) Gregory Downs, and (d) Charters Towers. The burst definition is 30 mm in 3 days (B303d) with a minimum of 0.2 mm day−1 on a burst day. The thick horizontal line in the bottom part of each panel represents the 30-mm threshold. Yellow shading in the bottom part of each panel represents burst events. The OLR is averaged over 10°–20°S and 119°–129°E in (a), 127°–137°E in (b), 135°–145°E in (c), and 142°–152°E in (d) (yellow boxes in maps), with the dashed line the climatology for 1980–2012. Gold and purple colors indicate daily OLR anomalies above and below the climatology, representing reduced and enhanced convective activity, respectively.


Similar to the date at which 50 mm of rainfall accumulates after 1 September (Drosdowsky and Wheeler 2014), there are regional differences in when the first wet season burst typically occurs, determined by the median over all seasons for the period 1960/61–2018/19. The western Top End (north of 15°S, west of 134°E), the coastal strip around the city of Cairns (18°S, 146°E), and southeast Queensland typically experience their first burst day in late November (Fig. 2a). For southeast Queensland, which often experiences winter rainfall (Drosdowsky and Wheeler 2014), the timing of the first wet season burst carries less weight than for the northern tropics. Nearly 400 km south-southeast of Darwin, Mathison Station often experiences its first burst in early December, about one month before the three more southerly tropical cattle stations.

Fig. 2. Observed median (left) first and (right) last burst day of the wet season (October–April inclusive) for (a),(d) all years from 1960/61 to 2017/18; (b),(e) El Niño years; and (c),(f) La Niña years. A burst event here is defined as the accumulation of 30 mm of rainfall in 3 days (B303d). El Niño (La Niña) years are when the 3-month running mean anomalies of Niño-3.4 SSTs (5°S–5°N, 170°–120°W; ONI) are at least 0.5°C higher (lower) than a sliding 30-yr climatology for four consecutive 3-month periods (October–December, November–January, December–February, and January–March). Medium gray shading represents regions that do not have a burst event in the majority of years. Dark gray shading represents regions where weather station density is insufficient for calculating bursts. The open circles show the locations of the four cattle stations from Fig. 1.


Previous studies have shown that ENSO strongly influences early wet season precipitation and monsoon activity (Cowan et al. 2020; Lo et al. 2007), which in turn dictates when new season pasture is available for cattle (Balston and English 2009). El Niño events are associated with later first bursts, with the area of northern Australia that receives its first burst day by the start of January contracting from 29.8% (for all years) to 24.7% for El Niño years (Fig. 2b). However, for La Niña seasons, 41.1% of northern Australia experiences its first burst by early January (Fig. 2c), consistent with the greater early season rainfall (Cowan et al. 2020) and the general asymmetric influence of ENSO events on Australian rainfall (Cai et al. 2010; King et al. 2013). The spatial pattern of the last burst day of the wet season is quite zonally oriented, with April cessations across the northern Top End and the eastern coastline (Fig. 2d). The semiarid regions south of 18°S typically experience late February cutoffs, indicative of how short the wet season is. In response to El Niño events, burst activity tends to cease earlier in the wet season (Fig. 2e), whereas La Niña events are associated with an extension of burst activity beyond March for most of the tropical north (Fig. 2f). Almost 2.5 times more of northern Australia experiences its last burst after 1 April in La Niña years compared with El Niño years.
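The first and last burst days mapped in Fig. 2 amount to locating, at each grid point and for each wet season, the earliest and latest flagged day. A minimal sketch under assumed inputs (one season of daily rainfall as a date-indexed pandas Series, reusing the burst_days helper sketched earlier) is given below.

```python
import numpy as np
import pandas as pd

def first_and_last_burst(daily_rain: pd.Series):
    """Return (first, last) burst-day dates for one wet season (Oct-Apr), or
    (None, None) if no burst day occurs.

    daily_rain: daily rainfall (mm) indexed by date for a single grid point.
    """
    flags = burst_days(daily_rain.to_numpy())        # helper sketched earlier
    idx = np.flatnonzero(flags)
    if idx.size == 0:
        return None, None
    return daily_rain.index[idx[0]], daily_rain.index[idx[-1]]
```

The maps in Fig. 2 would then take the median of these dates across all seasons, or across the El Niño or La Niña subsets.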

b. Localized interannual observed burst variability and ENSO

We next assessed how the choice of accumulation threshold affects the number of observed burst days (i.e., “burst activity”) for weeks 1–4 of December–March at Dampier Downs and Gregory Downs (Fig. 3). Alongside B303d, we tested three other thresholds: 20 mm (B203d), more applicable to semiarid regions; 50 mm (B503d); and 70 mm in 3 days (B703d), suited to the far north tropics. To reiterate, the analysis of observations was undertaken for the period from October 1960 to April 2018. Linear trend analysis for B303d is also shown in each panel to highlight the multidecadal changes in burst activity.

Fig. 3. Sensitivity of the burst day count to burst definition for the period from 1960/61 to 2017/18 at cattle stations (left) Gregory Downs and (right) Dampier Downs, for weeks 1–4 of (a),(e) December; (b),(f) January; (c),(g) February; and (d),(h) March. El Niño and La Niña years, defined in the methods, are shown as light pink and cyan bars, respectively. Linear regression lines for “30 mm in 3 days” bursts (B303d), as well as the slopes (days yr−1) and p values, are shown in each panel. Only the December trend in B303d at Dampier Downs is statistically significant (p < 0.05).


Using our standard B303d definition, there are 30 Decembers with at least two burst days at Gregory Downs (Fig. 3a), while lowering the threshold to 20 mm (B203d) leads to 38 Decembers with at least two burst days, with the most burst activity in the year 2000 (∼19 burst days). Raising the threshold to B503d and B703d reduces the frequency of December burst activity to 10 and 8 years, respectively, with more than half the bursts occurring in El Niño years. Bursts at Gregory Downs are most prominent in January and February (Figs. 3b,c), yet there is no clear dominance in burst activity based on El Niño and La Niña separation for any threshold. However, in March, there is slightly more burst activity (∼2–7 additional years) across the 20–50-mm thresholds during La Niña compared to El Niño (Fig. 3d). Across all four months, although B303d burst activity shows an increasing trend, none of the trends are statistically significant.

For Dampier Downs (northwest Australia), December bursts were infrequent until the mid-1990s (Fig. 3e), reflecting the increasing summer precipitation trend over northwest Australia since the 1950s (Dey et al. 2019) and the trend toward earlier northern monsoon rain onsets (Drosdowsky and Wheeler 2014). Trend analysis suggests a statistically significant increase in B303d bursts of 0.054 days yr−1. Burst activity ramps up in January, where between 34% (for B703d) and 86% of years (for B203d) experience more than two burst days (Fig. 3f). Since the 1990s, there has been a noticeable lack of January burst activity, with no more than 15 burst days per month in any year. While there has been a significant increase in burst days in December, further analysis of burst days in January finds the change to be insignificant (e.g., B303d: −0.003 days yr−1), reflecting a significant decrease in the length of consecutive burst days that is partially offset by a significant increase in the number of burst events (not shown). February is Dampier Downs’ most reliable burst month, with around 92% of all years experiencing at least one B203d burst event (Fig. 3g). Even by March, B203d bursts occur in 63% of all years, with more than twice the number of B503d and B703d bursts under La Niña conditions compared to El Niño (Fig. 3h). In contrast to December, there have been only small, insignificant changes in B303d burst activity (and other thresholds) in February and March. Despite its distance from the equatorial Pacific, Dampier Downs experiences higher December and March burst activity in La Niña years than in El Niño years, whereas ENSO appears to make little difference to burst activity during January and February.

c. Biases in ACCESS-S1 hindcast burst metrics

Before an analysis of the B303d burst prediction skill in ACCESS-S1, we first assessed the notable biases related to burst metrics in the calibrated hindcasts. This was achieved by calculating the difference between the hindcast ensemble December–March mean (i.e., mean of 23 years × 4 months × 11 members) and the observations across various B303d burst metrics, including number (frequency) of burst days, average burst duration, and total amount of burst rainfall. Here we specifically focused on the biases in the first four weeks of December–January–February–March (DJFM), combining the first start dates for each month. The DJFM biases in the total number of burst days and average burst duration (i.e., the ratio of the total number of burst days to the number of burst events), respectively, are shown in Figs. 4a,b. For the frequency, small positive biases are located in the central Gulf of Carpentaria region, at the junction between northeast and northwest Australia, while negative biases on the order of −0.8 days are seen along the northeast coastline including Cape York (Fig. 4a). In general, ACCESS-S1 produces longer burst events on average compared to observations, with the largest biases exceeding 1 day over the far northern tropics (Fig. 4b). The semiarid rangelands and desert regions south of 20°S and west of 145°E feature few climatological burst days, and hence biases remain small. Biases in burst frequency and duration are broadly consistent across the individual summer months (see appendix A, Figs. A1, A2).
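For reference, the three B303d metrics whose biases are mapped in Fig. 4 can be computed per grid point and hindcast window along the following lines (a sketch reusing the burst_days helper sketched earlier; the event-counting convention, treating each run of consecutive flagged days as one event, is an assumption).

```python
import numpy as np

def burst_metrics(rain, **burst_kwargs) -> dict:
    """Burst-day count, average burst duration, and total burst rainfall for one
    daily rainfall series (mm)."""
    rain = np.asarray(rain, dtype=float)
    flags = burst_days(rain, **burst_kwargs)                    # helper sketched earlier
    n_days = int(flags.sum())
    starts = np.flatnonzero(flags & ~np.r_[False, flags[:-1]])  # starts of flagged runs
    n_events = len(starts)
    return {
        "burst_days": n_days,
        "mean_duration": n_days / n_events if n_events else 0.0,
        "burst_rain": float(rain[flags].sum()),
    }
```

The biases shown in Fig. 4 are then simply the hindcast ensemble mean of each metric minus the corresponding observed (AWAP) value.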

Fig. 4. Mean DJFM hindcast (first start dates) biases in B303d metrics for weeks 1–4 over 1990–2012, including (a) mean total number of burst days, (b) average duration of bursts, (c) total amount of burst rainfall, and (d) rainfall not from bursts. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data. The regions of northwest Australia (120°–138°E, 20°–11°S) and northeast Australia (138°–150°E, 20°–11°S) are outlined in (a).


Like the heterogeneous bias pattern in the number of burst days, dry biases in the total amount of rainfall from bursts are simulated along the northeast coast and the central northwest region (Fig. 4c, and appendix A, Fig. A3). Wet biases over the far north Top End, the central Gulf region, and down into southeast Queensland reflect the positive biases in burst frequency. It might be expected that more frequent bursts should generate correspondingly higher burst rainfall totals, consistent with the ACCESS-S1 ensemble having an overall wet bias (King et al. 2020). Yet dry biases are seen in the same regions where burst frequency is underestimated, which is consistent with ACCESS-S1 underpredicting the number of wet days above 1 mm day−1 (King et al. 2020). For nonburst-related rainfall, ACCESS-S1 produces a spatially homogeneous mean summer dry bias widely exceeding 4 mm across most of northern Australia (Fig. 4d, and appendix A, Fig. A4). Even when the minimum daily threshold is raised to 5 mm day−1, the model continues to exhibit a widespread dry bias (figure not shown).

To summarize the summer burst biases relative to AWAP observations: ACCESS-S1 produces longer burst events on average, and where the hindcasts have a dry (or wet) bias they also tend to simulate too few (or too many) burst days. For rainfall unrelated to burst activity, dry biases dominate throughout northern Australia. Thus, it appears that the calibration on daily time scales is partially correcting the model’s light and heavy rain distributions (shape and mean) that contribute to bursts. However, in doing so, the calibration overcorrects the total wet bias of the uncalibrated hindcasts toward a slight dry bias. The overcorrection may arise because the calibration treats each day independently and does not conserve rain amounts across multiple days.

d. Burst prediction skill in ACCESS-S1 hindcasts

It is unclear whether the positive biases in summer burst activity over the tropical north (between 20° and 10°S) are systematic throughout the four-week hindcast period. To test this, we compared the temporal evolution of the observed spatial average of burst day frequency over northwest and northeast Australia (regions outlined in Fig. 4a) with the hindcast ensemble mean, median, and range for each January, selected because of its large burst frequency biases (appendix A, Fig. A1). The evaluation was split into weeks 1 and 2 (lead time 0), weeks 2 and 3 (lead time 1), and weeks 3 and 4 (lead time 2).

1) Fortnightly assessment

For northwest Australia, in weeks 1 and 2 of January, the hindcasts overestimate the number of burst days in ∼70% of years during the hindcast period (Fig. 5a). The relationship between the hindcast and observed medians strengthens from weeks 1 and 2 (R = 0.62) through to weeks 2 and 3 (R = 0.69; Fig. 5b). Likewise, a reduction in the root-mean-square error (RMSE; the mean magnitude of the hindcast error) signifies improved prediction skill into weeks 2 and 3. The increase in skill is only apparent in January, as there is reduced skill (based on RMSE and/or correlations) in weeks 2 and 3 for December, February, and March (Fig. 6a) and a further decline in skill in weeks 3 and 4 (R = 0.52; Fig. 5c). For northeast Australia, the skill in predicting the number of burst days at lead time 0 (R = 0.81, RMSE = 2.14 days) is far superior to that for the northwest (R = 0.62, RMSE = 2.40 days), with only 30% of observations lying outside the hindcast range (Fig. 5d). The skill drops away at lead times 1 (R = 0.58, RMSE = 2.49 days) and 2 (R = 0.40, RMSE = 2.71 days) as the hindcast range increases to 6.3 and 6.7 days, respectively, from 4.7 days (for lead time 0; Figs. 5e,f). In contrast, the average hindcast range at lead fortnight 2 for the northwest is 5 days, and the correlation is improved compared to the northeast.
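The deterministic scores quoted above reduce to the Pearson correlation and RMSE between the observed burst-day counts and the hindcast ensemble median across the 23 years; a minimal sketch with assumed array shapes follows.

```python
import numpy as np

def corr_and_rmse(obs: np.ndarray, hindcast_members: np.ndarray):
    """obs: (years,) observed burst-day counts;
    hindcast_members: (members, years) hindcast burst-day counts."""
    med = np.median(hindcast_members, axis=0)       # ensemble median per year
    r = np.corrcoef(obs, med)[0, 1]
    rmse = np.sqrt(np.mean((med - obs) ** 2))
    return r, rmse
```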

Fig. 5. Observed and ACCESS-S1 hindcast total number of B303d burst days, averaged over (a)–(c) northwest Australia and (d)–(f) northeast Australia, for all Januarys over 1990–2012 for (a),(d) weeks 1 and 2; (b),(e) weeks 2 and 3; and (c),(f) weeks 3 and 4. Correlation coefficients (Rmed) and root-mean-square error (RMSEmed) come from the ensemble hindcast median. The hindcast ensemble mean (asterisk), median (horizontal line), and 11-member range (colored bars) are compared to observations (open circles). Note, the weeks 2 and 3 totals are based on statistics calculated from the start of the month, so likely contain burst days from events that start in week 1. The same applies to weeks 3 and 4, with a chance that some days are from events that begin in week 2.


Fig. 6. Relationship between observed and hindcast ensemble mean total number of B303d burst days over 1990–2012, averaged over (a) northwest Australia and (b) northeast Australia, for December–March across the three fortnight forecast periods. Shown are correlation (circles, left vertical axes) and root-mean-square error (RMSE; asterisks, right vertical axes; days). Vertical bars represent the range of correlations and RMSEs from the 11 individual model ensemble members. A significant correlation at the 5% level for a sample size of 23 years is ∼0.4. Weeks 1 and 2 refer to a lead time 0 forecast, weeks 2 and 3 a lead time 1 forecast, and so forth.


A broader skill assessment of the number of burst days over northwest and northeast Australia is shown in Fig. 6, focusing on the correlation and RMSE of the ensemble mean (not the mean correlation and RMSE of the ensemble members) and the respective 11-member ensemble spread for each DJFM month (first start date). For the northwest, skill in the number of burst days drops consistently with lead time for December, from R = 0.65 in weeks 1 and 2 to R = 0.4 in weeks 3 and 4 (Fig. 6a). The hindcast skill peaks in February (R = 0.84) and remains high in March for weeks 1 and 2; however, the skill slightly weakens in weeks 2 and 3, with correlations borderline significant in weeks 3 and 4 for January–March. Consistent with the weaker correlations, the RMSE increases with lead time (pale yellow bars in Fig. 6a), with the ensemble mean RMSEs lying at the bottom of the range. Similar temporal skill is seen for the total burst rainfall amount (see appendix B).

As with the drop in skill for January as lead time increases (Figs. 5d–f), the correlation skill in burst frequency for northeast Australia shows a general decline from lead time 0 through to lead time 2 (Fig. 6b). March is the exception, consistently showing high correlations out to weeks 3 and 4 in both burst frequency and intensity. Interestingly, there is a substantial decline in correlation skill in December and February, compared with January and March. The poor December skill likely reflects the weak observed teleconnection with large-scale climate modes in that month, and a stronger association with localized meteorological events, based on an assessment of extreme rainfall metrics (King et al. 2020). However, this is also apparent in the other summer months, consistent with a weaker relationship between climate modes like ENSO and extreme rainfall compared to mean rainfall (King et al. 2014); therefore weak teleconnections may not be wholly responsible for the drop in skill. The RMSE for the northeast generally increases through the wet season as correlations become weaker, with December and January showing the lowest RMSE values (i.e., highest skill) for lead times 0 and 1.

2) Weekly assessment

In developing a probabilistic forecast product, we need to carefully consider its potential usefulness and applicability for on-the-ground decision making by producers during the wet season, particularly in the case of flooding (Cowan et al. 2019). Here, we evaluated ACCESS-S1’s ability to forecast any burst event within the multiweek timeframe (i.e., lead weeks 0 through to 2). Skillfully forecasting the arrival of a burst event during the wet season would be of great benefit to northern producers (J. Macdonald 2020, personal communication), placing less emphasis on accurately forecasting specific burst details like duration and intensity. Here we focused on the chance of any burst event occurring within the forecast period, allowing for some leeway in the forecast of a burst event’s timing. We determined skill through the Brier score (BS) and Brier skill score (BSS) metrics, where the BS (or mean squared probability forecast error) is
$$\mathrm{BS}=\frac{1}{N}\sum_{i=1}^{N}\left(p_{i}-o_{i}\right)^{2},$$
where $p_i$ is the forecast probability of a burst event in the forecast period, $o_i$ is the observed outcome (0 = no event, 1 = event), and $N$ is the total number of DJFM forecasts using the first of the month start dates (23 years × 4 months). The BSS describes the relative skillfulness of a prediction against a climatological forecast:
$$\mathrm{BSS}=1-\frac{\mathrm{BS}}{\mathrm{BS}_{\mathrm{clim}}},$$
where BS is the Brier score of the burst forecast probabilities and BS_clim is that of a climatological reference forecast (P = 0.5), using a cross-validated method in which the year in question is left out of the climatology (Lo et al. 2007). In Fig. 7, the BSS is shown for weeks 1–4 (lead times 0–3), with regional averages shown in Fig. 8a. For week 1, the hindcast skill is particularly strong across much of northern Australia (Fig. 7a), and aside from the Kimberley region of northwest Australia, average skill scores are at or above 25% (Fig. 8a). The drop in skill is quite dramatic into week 2 for the Central North region, which falls below 8%, while Cape York and the Top End drop to around 13% and the Kimberley maintains 16%–17%. The drop in skill in week 3 is less dramatic than for week 2, with the northwest and northeast showing scores of 7.1% and 9.5%, respectively (Fig. 7c). Into week 4, all regions except for the Central North show a small improvement over climatology, with the Top End and Cape York above 6% (Figs. 7d and 8a). From this analysis, if we consider a BSS > +10% as being statistically significant (Lim et al. 2011), ACCESS-S1 has demonstrated reasonable skill for Cape York and the Top End out to week 3 (lead time 2), the Kimberley out to week 2 (lead time 1), and the central northern region only to week 1 (lead time 0). For the whole of northern Australia, ACCESS-S1 has reasonable skill out to week 2.
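A compact sketch of the BS and BSS calculation is given below (array names and shapes are assumptions). The text quotes a climatological reference probability of 0.5 with the verifying year withheld; the sketch implements the more general leave-one-out event frequency, which reduces to approximately 0.5 when burst events occur in about half of the forecasts.

```python
import numpy as np

def brier_skill_score(p_forecast: np.ndarray, o_event: np.ndarray) -> float:
    """p_forecast: (N,) forecast probabilities of a burst event (0-1);
    o_event: (N,) observed outcomes (1 = burst event occurred, 0 = none)."""
    p_forecast = np.asarray(p_forecast, dtype=float)
    o_event = np.asarray(o_event, dtype=float)
    n = len(o_event)
    bs = np.mean((p_forecast - o_event) ** 2)
    # Cross-validated climatological reference: for each forecast, the reference
    # probability is the observed event frequency with that case left out.
    p_clim = (o_event.sum() - o_event) / (n - 1)
    bs_clim = np.mean((p_clim - o_event) ** 2)
    return 1.0 - bs / bs_clim
```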
Fig. 7. Brier skill scores for the prediction of hindcast B303d burst events for (a) week 1, (b) week 2, (c) week 3, and (d) week 4 for the first of month start dates combined for December, January, February, and March. A burst event is determined when at least two burst days are forecast within the week. Black hatching covers regions with less than 10% burst activity across the sample size of 92 (23 years × 4 months). Average skill scores over northwest Australia (NWA) and northeast Australia (NEA), as marked in (a), are shown above each panel. Week 1 refers to a lead time 0 forecast, week 2 refers to a lead time 1 forecast, and so forth.


Fig. 8. Brier skill scores for the prediction of hindcast B303d burst events for weeks 1, 2, 3, and 4, averaged over northern Australia and four subregions, for (a) all hindcasts, (b) hindcasts where the ensemble mean predicts strong MJO amplitudes for the week, and (c) hindcasts where the ensemble mean predicts weak MJO amplitudes for the week. Shown are hindcasts with a first of the month start date for DJFM. The sample sizes for each week are listed along the horizontal axes. Definition details for strong and weak MJO amplitudes are in the main text. Along the horizontal axis, week 1 refers to a lead time 0 forecast, week 2 refers to a lead time 1 forecast, and so forth.


e. The influence of the MJO on hindcast burst skill

The MJO is one of the main drivers of intraseasonal rainfall variability during northern Australia’s wet season, with a greater probability of wet conditions across northern Australia during active MJO phases 4–7 and dry conditions during the suppressed MJO phases 8 and 1–3 (Marshall et al. 2021; Wheeler et al. 2009). While ACCESS-S1’s skill in predicting rainfall extremes in the wet season falls away beyond one week (King et al. 2020), it skillfully predicts the MJO out to about 28 days in summer (Marshall et al. 2021; Marshall and Hendon 2019) and captures the MJO-induced variations in tropical cyclone activity (Camp et al. 2018). For each first-of-month start date for DJFM over 1990–2012, we determined whether the ensemble mean MJO amplitudes for weeks 1, 2, 3, and 4 were strong or weak (see section 2b). The BSS was calculated for these hindcast weeks and then averaged over northern Australia and the four subregions (as in Fig. 8a). It is worth noting that the hindcast MJO amplitude in summer has been shown to be considerably weaker (by 10%–20%) than the observed amplitude after ∼10 days (Marshall et al. 2021), which may influence skill from lead time 1 onward.

For the week 1 prediction (lead time 0), a strong MJO amplitude enhances the predictive skill for burst activity for the Top End only (red line, Fig. 8), with a BSS of ∼30% above climatology compared to 12% for a weak MJO prediction (Figs. 8b,c). For all other regions, the week 1 skill in predicting burst activity is either the same or marginally enhanced when hindcasts predict weak MJO amplitudes. Into week 2 (lead time 1), the burst skill for the Central North and Kimberley (Figs. 8b,c) is higher when the prediction is for strong MJO amplitudes. The opposite is true for the Top End and Cape York, with greater burst skill accompanying a weak MJO amplitude. Through to week 3 (lead time 2), the Top End and Cape York both stand out as the only regions with substantially stronger skill when accompanying a weak MJO amplitude; this raises the skill level across northern Australia (thick black line, Fig. 8). Other regions either show no difference between MJO amplitudes or greater skill with a strong MJO. As burst skill declines through to week 4 (lead time 3), all regions aside from the Top End show better burst skill scores accompanying weak MJO amplitudes. The lack of clear separation in burst predictive skill between MJO amplitudes for most regions confirms the lack of extreme weekly rain predictability over central northern Australia when the MJO is strong (Marshall et al. 2021), possibly implying an issue with MJO–summer rainfall teleconnections.

It is possible that individual summer months are degrading the hindcast skill, as shown for extreme rainfall in December (King et al. 2020). Another possible issue is that the sample sizes are too small to make robust interpretations. To overcome both issues, we combined the four hindcast start dates (1st, 9th, 17th, and 25th). This allowed us to reassess the influence of the predicted MJO amplitude on the burst forecast skill for the two regions that have a strong observed MJO–rainfall teleconnection: the Top End and Cape York (Wheeler et al. 2009). Combining multiple start dates for December produced week 1 output for 1–7, 9–15, 17–23, and 25–31 December. For week 2, this was a combination of 8–14, 16–22, 24–30 December and 26 December–1 January.

The results for the Top End for December and January start dates show that the burst skill accompanying a strong MJO amplitude prediction is generally greater than for a weak MJO forecast (red lines; Figs. 9a,b). The difference is greatest in weeks 2 and 3 for December, and weeks 1 and 2 for January. Into February, there is no clear skill improvement based on the MJO forecast through to week 2, while in week 3, the skill associated with a weak MJO forecast (∼20%) far outperforms that for a strong MJO forecast, which is no better than climatology (Fig. 9c). Through March, there is little separating the skill scores based on MJO amplitude in the first two weeks (Fig. 9d). Averaged over the far northeast (Cape York), aside from December, the week 1 skill from January to March is greater for a weak MJO than for a strong MJO amplitude forecast (gray lines; Fig. 9). In January, the burst skill for a strong MJO amplitude drops rapidly and remains at or below the climatological forecast (0% skill) in weeks 2–3, while the skill with a weak MJO amplitude remains at ∼20% (Fig. 9b). Although little separates the burst skill in February (Fig. 9c), in March the burst skill gap between weak and strong MJO amplitude forecasts remains constant throughout the forecast periods (Fig. 9d).

Fig. 9. Brier skill scores for the prediction of hindcast B303d burst events for weeks 1, 2, 3, and 4, averaged over the Top End (red lines) and Cape York (gray lines), for strong MJO amplitudes (thick lines) and weak MJO amplitudes (dashed lines), in (a) December, (b) January, (c) February, and (d) March. Shown are all hindcast start dates of the month over 1990–2012. Along the horizontal axis, week 1 refers to a lead time 0 forecast, week 2 refers to a lead time 1 forecast, and so forth.


The above results suggest that any skill in burst event prediction arising from the MJO amplitude is highly regionally dependent, confirming the analysis of Marshall et al. (2021), who found that Cape York is one region where ACCESS-S1 shows reduced skill in predicting extreme weekly rainfall associated with a strong MJO. This could be related to the lack of eastward propagation of MJO-induced rainfall over the western Pacific in ACCESS-S1 during summer (Marshall and Hendon 2019), inhibiting the correct rainfall response over the far northeast of Australia. Other forecast systems have shown deficiencies in the MJO’s eastward propagation over the Maritime Continent (Jones et al. 2015). It is also possible that ACCESS-S1 does not properly capture the local convective phase of the MJO over northern Australia, which may stem from moisture convergence and sea breeze biases along the northern coastlines (Hawcroft et al. 2021). Predictive skill for rainfall associated with active MJO phases is similarly limited over the western United States in other dynamical forecast systems, particularly for hindcasts initialized in MJO phases 3–4 (Pan et al. 2019). For northern Australia, two-thirds of all bursts have a midlatitude influence and are only weakly related to the MJO (Narsey et al. 2017). The extent to which ACCESS-S1 can capture the midlatitude influence on bursts is yet to be determined.

f. Real-time example of burst potential forecasts for early 2021 from ACCESS-S1

Here we showcase a real-time forecast product, called “burst potential,” showing the likelihood of a burst event in the three weeks following a late January 2021 forecast. Introduced for the 2020/21 northern wet season, the burst potential prototype forecast product is derived from ACCESS-S1 and is part of its suite of prototype products on the bureau’s Forecast Viewing Tool (de Burgh-Day et al. 2020). The development of the burst potential follows on from the successful release of the northern rainfall onset forecast product from ACCESS-S1 for the 2019/20 wet season, replacing the forecast maps produced by the bureau’s older-generation model (Cowan et al. 2020). The burst potential example shown here is for a 30 January 2021 model initialization, when there was strong MJO activity in phase 6, a phase that typically shows a strong interaction with northern Australian summer rainfall (Wheeler et al. 2009). From initialization, the ACCESS-S1 ensemble mean of 99 members (a description of which can be found in Australian Bureau of Meteorology 2019) shows a prediction of an active MJO progressing eastward into phase 7 (western Pacific) and remaining there until 19 February (Fig. 10a). Climatologically in observations, as the MJO traverses eastward into the western Pacific, the region of enhanced rainfall probabilities moves away from the northwest of Australia and the Top End to the far northeast and Cape York (Wheeler et al. 2009). Figures 10b–d show the lead time 0–2-week forecasts of the probability of a B303d burst event occurring within the forecast period, where darker purple shades represent higher likelihoods. A direct comparison can be made to the observed climatological probability of a B303d burst event occurring in each separate week, based on all years from 1960 to 2018 (appendix C). These burst potential maps display the percentage of 99 ensemble members that forecast a burst event to commence within the forecast period. Given that this product describes an accumulation over 3 days, a burst can extend across two forecast periods, meaning a predicted burst in week 2 may form part of an event that began in the previous week. The prototype forecast maps, which include B203d, B503d, and B703d maps for lead weeks 0–2 and lead fortnights 0–2, are produced every day and are available to key stakeholders from northern Australia (e.g., beef producers).

Fig. 10. (a) MJO forecast initialized on 30 Jan 2021 showing 33 ensemble member forecasts out to 30 days (colored lines) and the ensemble mean (thick black line). (b) Probability of a B303d burst event for week 1 (lead time 0): 30 Jan–5 Feb 2021 from a 99-member lagged forecast ensemble. (c),(d) As in (b), but for week 2 (lead time 1) and week 3 (lead time 2). The forecast maps are snapshots of the actual prototype forecast product from the bureau’s Forecast Viewing Tool, with the product’s visual design set to match other prototype products. In these maps, the burst potential product is called burst event potential.


As the lead time extends from 0 to 2 weeks, the region with the greatest burst event probability shifts from the northwest to the far northeast, in line with the MJO forecast. Based on the observed climatological probabilities of an event occurrence, there are only small changes in the expected likelihood across the three weeks (appendix C). For lead time 0 (30 January–5 February), forecast probabilities above 75% stretch from the Pilbara in the far west all the way to the northeast (Fig. 10b). These predictions are well above the observed climatological probability for that week of the year (appendix Fig. C1). Moving ahead one week (6–12 February), with the predicted MJO well into phase 7, the strong potential for a burst event shifts toward the northeast (Fig. 10c). As the predicted MJO slightly weakens and moves into phase 8 in lead week 2 (13–19 February), the forecast burst activity tapers off toward the observed climatology (appendix Fig. C3), with a high event probability only in Cape York and an isolated region of the Top End (Fig. 10d). Early indications from the 2020/21 wet season are that forecasting burst events beyond lead week 1 will continue to be challenging in semiarid regions, where the forecast skill is climatologically low, because of the rare nature of extreme rain events. It is also yet to be determined whether the MJO exerts some control over simulated burst amounts, as observed (e.g., Berry and Reeder 2016).

4. Discussion and conclusions

In this study, we have presented a simplified and more practical (for end-users) definition of a summer burst event—the accumulation of 30 mm in 3 days—to investigate burst activity in observations and the bureau’s current operational multiweek to seasonal prediction system, ACCESS-S1. We first showed that observed bursts are generally associated with broad-scale convective events, with the first onset of a burst event typically occurring over the far north and east of Australia in late November. As might be expected, La Niña conditions push the burst season well into late April over the far northern coasts, while further inland, burst activity typically ceases by late March. Based on our threshold-based rainfall definition, bursts peak in January and February across northern Australia, following the seasonal cycle in rainfall, slightly later than the burst events defined by Berry and Reeder (2016). In general, observed bursts across northern Australia are somewhat independent of the state of ENSO (Berry and Reeder 2016), and our results confirm this for the peak summer months, January and February.
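
As a hedged illustration of this threshold-based definition, the sketch below flags burst days in a single daily rainfall series when they fall within any 3-day window accumulating at least 30 mm, and then summarizes the burst-day count and the fraction of total rainfall attributable to bursts. The exact operational rules for assigning individual days to a burst are not reproduced here, and the synthetic series is a placeholder.

```python
# Hedged sketch of flagging B303d burst days (30 mm accumulated over 3 days) in one
# daily rainfall series. A day is marked as a burst day if it lies inside at least one
# qualifying 3-day window; this is an illustrative choice, not the paper's exact rule set.
import numpy as np

def flag_burst_days(daily_rain_mm: np.ndarray, threshold: float = 30.0, window: int = 3) -> np.ndarray:
    """Return a boolean array, True where a day belongs to a 3-day window with >= threshold mm."""
    n = daily_rain_mm.size
    burst_day = np.zeros(n, dtype=bool)
    for start in range(n - window + 1):
        if daily_rain_mm[start:start + window].sum() >= threshold:
            burst_day[start:start + window] = True
    return burst_day

# Toy December-March series (synthetic): count burst days and the share of rain from bursts
rng = np.random.default_rng(1)
rain = rng.gamma(shape=0.5, scale=10.0, size=121)   # ~121 daily totals (mm), placeholder values
is_burst = flag_burst_days(rain)
print("burst days:", int(is_burst.sum()),
      "| fraction of total rain from bursts:", round(float(rain[is_burst].sum() / rain.sum()), 2))
```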

Using calibrated hindcasts of summer (first of the month start dates; December–March) rainfall, we have shown that ACCESS-S1 is skillful in predicting a burst event out to a lead time of at least two weeks for the Cape York, Kimberley, and Top End subregions. The only subregion with poor skill is the Central North. The prediction skill for the fortnight periods closely matches the week-to-week evolution (appendix B). The skill of ACCESS-S1 is overshadowed by the hindcast biases, whereby regions that experience more burst days than in observations tend to also experience more burst rain, and vice versa (e.g., Fig. 4). Similar biases in burst activity have been uncovered in coupled climate models, which tend to produce excess summer rainfall over northern Australia (Narsey et al. 2018). Calibrating the raw hindcast rainfall to observations significantly reduces the biases in burst frequency, duration, and total burst rainfall; however, calibration also leads to an overcorrection, contributing to a greater percentage of summer rain from bursts than observed, particularly in December and January (see appendix A). Given that the primary interest is ACCESS-S1’s ability to predict burst events skillfully, calibration is a necessary part of maintaining the consistency and integrity of the forecasts; however, model biases in the depiction of the MJO and the convective processes that initiate bursts still require attention (e.g., Marshall et al. 2021). Observational studies have determined that two-thirds of bursts that occur during northern Australia’s wet season (October–April) have a midlatitude influence (Narsey et al. 2017), often stemming from extratropical Rossby waves over the Indian Ocean (Berry and Reeder 2016). To what extent ACCESS-S1 can reproduce this majority split between midlatitude and tropically generated bursts remains to be verified. Furthermore, climate models have difficulty capturing the association between the MJO and bursts over northern Australia (Narsey et al. 2018), which appears consistent with the forecast performance of ACCESS-S1 for summer with respect to heavy rainfall (King et al. 2020; Marshall et al. 2021).
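
The calibration method itself is not detailed in this section; purely as an illustration of the kind of adjustment involved, the sketch below applies a generic empirical quantile mapping of raw hindcast rainfall toward an observed climatological distribution. It should not be read as the post-processing actually used for ACCESS-S1, and all distributions shown are synthetic.

```python
# Illustration only: generic empirical quantile mapping of hindcast daily rainfall toward
# observations. This is NOT necessarily the calibration used operationally for ACCESS-S1;
# it simply demonstrates the type of bias adjustment described in the text.
import numpy as np

def quantile_map(raw_hindcast: np.ndarray, hindcast_clim: np.ndarray, obs_clim: np.ndarray) -> np.ndarray:
    """Map raw hindcast values onto the observed climatological distribution."""
    # Empirical CDF position of each raw value within the hindcast climatology...
    ranks = np.searchsorted(np.sort(hindcast_clim), raw_hindcast, side="right")
    quantiles = np.clip(ranks / hindcast_clim.size, 0.0, 1.0)
    # ...then read off the observed value at the same quantile
    return np.quantile(obs_clim, quantiles)

rng = np.random.default_rng(2)
hindcast_clim = rng.gamma(0.5, 14.0, size=5000)   # synthetic model climatology (wetter)
obs_clim = rng.gamma(0.5, 10.0, size=5000)        # synthetic observed climatology
raw = rng.gamma(0.5, 14.0, size=10)               # a few raw hindcast values (mm)
print(np.round(raw, 1), "->", np.round(quantile_map(raw, hindcast_clim, obs_clim), 1))
```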

A strong motivation for this study was to provide a skill assessment of bursts in ACCESS-S1 ahead of a possible operationalization of the burst product. In early July 2021, the bureau’s prototype burst product was officially selected to become an operational product (D. Hudson 2021, personal communication). This selection should pave the way for the development of other multiweek and seasonal rainfall or multivariate-based products, particularly those targeted at the agricultural and livestock sectors, and provide useful information to end-users beyond the time frame of a deterministic forecast. Initial feedback from northern producers on the burst potential indicated it could be integrated into practical management decisions related to pasture growth, weed spraying, and road crossings (E. Hinds 2020, personal communication) or cutting hay (D. Rea 2020, personal communication). As such, the definition for the final burst potential product needs to be simple and fit for purpose, and modified to suit regions with vastly different soil types or climatic zones (i.e., temperate versus tropical versus semiarid). A definition that uses absolute rainfall values rather than relative values has greater utility for producers, since it is relatable to direct experience (Balston and English 2009). One limitation of using an “absolute threshold over 3 days” definition is that it does not capture near events that almost reach the total (e.g., 29 mm over 3 days) but that may still be important to pastoralists during dry periods. Also, because heavy rainfall has little additional effect on pasture growth during the wet season compared with moderate rainfall (Brown et al. 2019), a burst product may be more appropriate for preventative logistical planning (i.e., moving cattle away from possible flood zones). In a practical sense, there may also be some latency in the uptake of a product like the burst potential by some producers, owing to past reliance on other rainfall-related rules that help with management decisions (Balston and English 2009), or simply because of information overload (e.g., McIntosh et al. 2017). Therefore, it is important that further discussions and feedback take place with the bureau’s major agricultural clients, including those from southern regions, before a final operational version of the burst potential product is made available.

In this study, we have shown that the bureau’s current multiweek to seasonal prediction system, ACCESS-S1, has good prediction skill for summer rainfall bursts out to a lead time of two weeks. The model’s predictions are most skillful over the far tropical north regions of the Top End and Cape York, which typically experience bursts from late November/early December through to late April. Further work is required to fully reconcile the influence of the MJO on predicted rainfall bursts, and to determine whether prediction skill improves in the next model version, ACCESS-S2. It is anticipated that the upgrade of the bureau’s operational seasonal prediction system to ACCESS-S2 should improve the forecast skill for rare weather events beyond seven days, based on several model improvements, including a new coupled assimilation scheme and an updated soil moisture initialization. Research is also ongoing at the Met Office to improve the convection scheme that will be incorporated into later versions of ACCESS-S. Other areas of improvement across the Unified Model systems more generally include fixing biases concerning sea breezes and the timing of convectively induced rainfall, which in turn creates errors in the sea breezes (Birch et al. 2015). The expectation is that uncovering new insights into the dynamics that initiate and maintain bursts over northern Australia will lead to improved accuracy in burst prediction and greater confidence in, and potentially uptake of, the bureau’s multiweek forecasting products.

Acknowledgments

This work is funded by Meat and Livestock Australia, the Queensland Government through the Drought and Climate Adaptation Program, and the University of Southern Queensland through the Northern Australia Climate Program (NACP). Catherine de Burgh-Day’s contribution is part of the Forewarned is Forearmed project, which is supported by funding from the Australian Government Department of Agriculture, Water and the Environment as part of its Rural R&D for Profit programme. We thank the multiweek and seasonal applications team and the coupled modelling team at the bureau, including Robin Wedd, Griffith Young, Hailin Yan, Morwenna Griffiths, and Debra Hudson. Special thanks go to Hongyan Zhu, Eun-Pa Lim, and David Jones for initial draft feedback. This research was undertaken with the assistance of resources from the National Computational Infrastructure Australia, a National Collaborative Research Infrastructure Strategy enabled capability supported by the Australian Government. We want to extend our strong appreciation to the two anonymous reviewers for their detailed assessment and feedback.

Data availability statement.

All observational and reanalysis data are publicly available. Hindcast data from ACCESS-S1 are available from TC upon reasonable request. The data in this study were analyzed and plotted using NCAR Command Language version 6.6.2 (www.ncl.ucar.edu).

APPENDIX A

Burst Metric Biases in ACCESS-S1 across the Wet Season Months

To confirm that the burst metric biases seen in ACCESS-S1 are consistent across the individual austral summer months, we calculated the week 1–4 biases separately for December, January, February, and March. Shown for B303d bursts are frequency (Fig. A1), duration (Fig. A2), total burst rainfall (Fig. A3), rainfall unrelated to bursts (Fig. A4), percentage of total rainfall from bursts (Fig. A5), and average daily intensity of bursts (Fig. A6). The results suggest burst day frequency biases are greatest in December and January (Fig. A1). Burst duration biases peak in January to February, with anomalies greater than 1 day across the northwest (Fig. A2). The wet bias in burst rainfall is prominent over northwest Australia in December and January, whereas a dry bias is seen over the northeast in February (Fig. A3). The positive bias in burst rain over the northwest may reflect a broader systematic wet bias in ACCESS-S1 (Hudson et al. 2017; King et al. 2020). Despite this, a dry model bias in nonburst rainfall persists across the summer months (Fig. A4), contributing to widespread positive biases in the proportion of total rainfall from bursts (Fig. A5). The average daily intensity of rainfall from bursts (i.e., the average amount of precipitation that falls on each burst day) is generally underpredicted in ACCESS-S1 across much of the tropical north (Fig. A6), in agreement with the negative model biases in maximum 1-day rainfall extremes (King et al. 2020).
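
The appendix figures combine two simple operations: a bias field (calibrated hindcast mean minus observed AWAP mean) and a 1–2–1 spatial filter applied 10 times, as stated in the figure captions. The sketch below shows one plausible implementation of both steps; the edge handling and the order of the latitude/longitude passes are assumptions, and the input arrays are synthetic.

```python
# Sketch of the two steps behind Figs. A1-A6, under assumed 2D (lat, lon) array layouts:
# (i) bias = calibrated hindcast mean minus observed mean of a burst metric, and
# (ii) a 1-2-1 smoother applied 10 times in both directions to reduce spatial noise.
import numpy as np

def smooth_121(field: np.ndarray, passes: int = 10) -> np.ndarray:
    """Apply a 1-2-1 weighted filter along latitude then longitude, `passes` times.
    Edge values are handled by edge padding (an assumption, not stated in the paper)."""
    out = field.astype(float).copy()
    for _ in range(passes):
        p = np.pad(out, ((1, 1), (0, 0)), mode="edge")          # latitude pass
        out = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
        p = np.pad(out, ((0, 0), (1, 1)), mode="edge")          # longitude pass
        out = 0.25 * p[:, :-2] + 0.5 * p[:, 1:-1] + 0.25 * p[:, 2:]
    return out

# Synthetic example: bias in mean burst-day counts on a coarse grid
rng = np.random.default_rng(6)
hindcast_mean = rng.gamma(2.0, 3.0, size=(40, 60))   # placeholder hindcast mean burst days
obs_mean = rng.gamma(2.0, 2.5, size=(40, 60))        # placeholder observed mean burst days
bias = smooth_121(hindcast_mean - obs_mean)          # smoothed hindcast-minus-observed bias
print(bias.shape, round(float(bias.mean()), 2))
```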

Fig. A1.

Hindcast bias in the mean total number of B303d burst days for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

Fig. A2.

Hindcast bias in the mean duration of B303d bursts for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

Fig. A3.

Hindcast bias in the total amount of rainfall from B303d bursts for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

Fig. A4.

Hindcast bias in the total amount of rainfall unrelated to B303d bursts for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

Fig. A5.

Bias in the percentage of total rainfall from B303d bursts for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. Lighter gray shading indicates where bursts are detected in fewer than 50% of the 23 years in either the observations or ACCESS-S1, which is insufficient to determine a median. Darker gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

Fig. A6.

Bias in the average daily intensity of B303d bursts for weeks 1–4 over 1990–2012 for (a) December, (b) January, (c) February, and (d) March. The intensity is the average amount of burst precipitation that falls on each burst day. The bias is the difference between the calibrated hindcast mean and observed (AWAP) mean. Lighter gray shading indicates where bursts are detected in fewer than 50% of the 23 years in either the observations or ACCESS-S1. Darker gray shading represents regions where weather station density is insufficient for the calculation of bursts. A 1–2–1 spatial filter is applied 10 times to reduce the spatial noisiness of the data.

APPENDIX B

Burst Metric Skill in ACCESS-S1

In Fig. 6, we show an assessment of the temporal relationship between the observed and hindcast ensemble mean total number of B303d burst days. Figure B1 displays the same relationship, but for the total rainfall from B303d bursts, for each summer month. In general, model skill deteriorates from weeks 1 and 2 through weeks 3 and 4, more so in northeast Australia than in northwest Australia.
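
For reference, the skill measures plotted in Fig. B1 can be computed as in the sketch below: the Pearson correlation and RMSE between the 23-yr observed series and the hindcast ensemble mean, with the same statistics for each of the 11 members giving the range shown by the vertical bars. The synthetic series stand in for the regional averages used in the paper.

```python
# Sketch of the Fig. B1 skill measures: correlation and RMSE between an observed series
# and the hindcast ensemble mean, plus the spread across individual members.
# The 23-yr series and 11-member ensemble below are synthetic placeholders.
import numpy as np

def corr_and_rmse(obs: np.ndarray, fcst: np.ndarray) -> tuple[float, float]:
    r = float(np.corrcoef(obs, fcst)[0, 1])
    rmse = float(np.sqrt(np.mean((fcst - obs) ** 2)))
    return r, rmse

rng = np.random.default_rng(3)
obs = rng.gamma(2.0, 40.0, size=23)                    # e.g., weeks 1+2 burst rain, 1990-2012
members = obs + rng.normal(0.0, 40.0, size=(11, 23))   # 11-member synthetic hindcast
ens_mean = members.mean(axis=0)

r_mean, rmse_mean = corr_and_rmse(obs, ens_mean)
member_stats = np.array([corr_and_rmse(obs, m) for m in members])   # (11, 2)
print(f"ensemble mean: r = {r_mean:.2f}, RMSE = {rmse_mean:.1f} mm")
print(f"member r range: {member_stats[:, 0].min():.2f} to {member_stats[:, 0].max():.2f}")
# Note: correlations of ~0.4 are significant at the 5% level for 23 samples (as in Fig. B1).
```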

Fig. B1.

Relationship between observed and hindcast ensemble mean total rainfall from B303d bursts, averaged over (a) northwest Australia and (b) northeast Australia, for December–March forecasts across the three forecast periods. Shown are correlation (circles, left vertical axes) and root-mean-square error (RMSE; asterisks, right vertical axes; mm). Vertical bars represent the range of correlations and RMSEs from the 11 individual model ensemble members. Correlations of ∼0.4 are significant at the 5% level for a sample size of 23 years. Weeks 1 and 2 refer to a lead time 0 forecast, weeks 2 and 3 refer to a lead time 1 forecast, and so forth.

In assessing the prediction skill for a burst event to occur within the lead week and fortnight time frames, we use the Brier skill score. The fortnight predictions can extend the skill to lead time 1 (Figs. B2a,b); however, we cannot distinguish whether a predicted event occurs early in the fortnight (e.g., on days 1 and 2) or late (e.g., on days 13 and 14). With this in mind, the prediction skill for a burst event over northeast Australia lies just above a 7% improvement over climatology in weeks 3 and 4 (Fig. B2), while for northwest Australia the skill sits just above a 5% improvement.
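
The Brier skill score measures the improvement of the probabilistic burst event forecast over a climatological reference, so a score of 0.07 corresponds to the roughly 7% improvement quoted above. The sketch below shows the calculation under assumed inputs: an observed binary event series and a forecast probability taken as the fraction of the 11 members predicting the event, following the event definition in the Fig. B2 caption; all values are synthetic placeholders.

```python
# Sketch of a Brier skill score relative to climatology for the fortnightly burst events.
# The event follows the Fig. B2 caption (at least two burst days within the fortnight);
# the member-level event series and probabilities below are synthetic.
import numpy as np

def brier_skill_score(p_fcst: np.ndarray, obs_event: np.ndarray) -> float:
    """BSS = 1 - BS_forecast / BS_climatology over the hindcast years."""
    p_clim = obs_event.mean()                        # climatological event frequency
    bs_fcst = np.mean((p_fcst - obs_event) ** 2)     # Brier score of the forecasts
    bs_clim = np.mean((p_clim - obs_event) ** 2)     # Brier score of the climatological reference
    return float(1.0 - bs_fcst / bs_clim)

rng = np.random.default_rng(4)
obs_event = (rng.random(23) < 0.6).astype(float)                       # observed events, 23 years
member_event = (obs_event + rng.normal(0.0, 0.6, size=(11, 23))) > 0.5  # noisy member "events"
p_fcst = member_event.mean(axis=0)                                      # forecast probability
print(f"BSS = {brier_skill_score(p_fcst, obs_event):.2f}")   # > 0 means better than climatology
```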

Fig. B2.

Brier skill scores of hindcast burst events for (a) weeks 1 and 2, (b) weeks 2 and 3, and (c) weeks 3 and 4, for the first of the month start dates for DJFM. A burst event is determined when at least two burst days are forecast within each fortnight. Black hatching covers regions with less than 10% burst activity across 1990–2012. Weeks 1 and 2 refer to a lead time 0 forecast, weeks 2 and 3 a lead time 1 forecast, and so forth.

APPENDIX C

Observed Burst Event Potential Climatology

The real-time analysis of burst potential forecasts for the three weeks from the 30 January 2021 initialization shows how the probability of a B303d burst event over the far tropical northwest region ranges from 75%–100% in lead week 0 to less than 50% in lead week 2 (Figs. 10b–d). We can compare these forecasts to the observed climatological probability for the three weeks in question, shown in Figs. C1–C3, based on all individual weeks over the period 1960–2018. That is, for the week of 30 January–5 February, we determine the percentage of the 59 observed years that have a B303d burst event in that particular week. As can be seen, the lead week 0 forecast (Fig. 10b) shows stronger probabilities than the observed climatological probabilities over the northwest (Fig. C1). For lead week 1 and over the same region (Fig. 10c), the forecast shows slightly lower odds of a burst event than would be expected on average (Fig. C2), while in lead week 2 (Fig. 10d), the forecasts across northern Australia tend to offer no more skill than climatology (Fig. C3).
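
The observed climatology in Figs. C1–C3 amounts to counting, at each grid point, the fraction of the 59 years (1960–2018) in which a B303d burst event occurs within the fixed calendar week. The sketch below shows that counting step for a single grid point; the data handling (calendar alignment, gridding) is omitted, and the rainfall values are synthetic.

```python
# Sketch of the appendix C climatology: for a fixed calendar week (e.g., 30 Jan-5 Feb),
# the percentage of the 1960-2018 years with at least one 3-day accumulation >= 30 mm
# commencing within the week, at one grid point. Inputs below are synthetic placeholders.
import numpy as np

def weekly_burst_climatology(daily_rain_by_year: np.ndarray,
                             threshold: float = 30.0, window: int = 3) -> float:
    """daily_rain_by_year: shape (year, day) for the 7-day calendar window.
    Returns the percentage of years with a qualifying 3-day accumulation in that week."""
    n_years, n_days = daily_rain_by_year.shape
    accum = np.stack(
        [daily_rain_by_year[:, d:d + window].sum(axis=1) for d in range(n_days - window + 1)],
        axis=1,
    )  # (year, start_day)
    event_in_year = (accum >= threshold).any(axis=1)
    return float(100.0 * event_in_year.mean())

rng = np.random.default_rng(5)
toy_week = rng.gamma(0.6, 9.0, size=(59, 7))   # 59 years x 7 days at one grid point (synthetic)
print(f"climatological burst-event probability: {weekly_burst_climatology(toy_week):.0f}%")
```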

Fig. C1.

Observed climatological probability of a B303d burst event for the week of 30 Jan–5 Feb over the period 1960–2018.

Fig. C2.

As in Fig. C1, but for the week of 6–12 Feb.

Fig. C3.

As in Fig. C1, but for the week of 13–19 Feb.

REFERENCES

• An-Vo, D.-A., K. Reardon-Smith, S. Mushtaq, D. Cobon, S. Kodur, and R. Stone, 2019: Value of seasonal climate forecasts in reducing economic losses for grazing enterprises: Charters Towers case study. Rangeland J., 41, 165–175, https://doi.org/10.1071/RJ18004.
• Australian Bureau of Meteorology, 2019: Operational Implementation of ACCESS-S1 Forecast Post-Processing. Operations Bulletin 124, 21 pp., http://www.bom.gov.au/australia/charts/bulletins/opsull-124-ext.pdf.
• Balston, J., and B. English, 2009: Defining and predicting the “break of the season” for north-east Queensland grazing areas. Rangeland J., 31, 151–159, https://doi.org/10.1071/RJ08054.
• Berry, G. J., and M. J. Reeder, 2016: The dynamics of Australian monsoon bursts. J. Atmos. Sci., 73, 55–69, https://doi.org/10.1175/JAS-D-15-0071.1.
• Birch, C. E., M. J. Roberts, L. Garcia-Carreras, D. Ackerley, M. J. Reeder, A. P. Lock, and R. Schiemann, 2015: Sea-breeze dynamics and convection initiation: The influence of convective parameterization in weather and climate model biases. J. Climate, 28, 8093–8108, https://doi.org/10.1175/JCLI-D-14-00850.1.
• Brown, J. N., A. Ash, N. MacLeod, and P. McIntosh, 2019: Diagnosing the weather and climate features that influence pasture growth in Northern Australia. Climate Risk Manage., 24, 1–12, https://doi.org/10.1016/j.crm.2019.01.003.
• Cai, W., P. van Rensch, T. Cowan, and A. Sullivan, 2010: Asymmetry in ENSO teleconnection with regional rainfall, its multidecadal variability, and impact. J. Climate, 23, 4944–4955, https://doi.org/10.1175/2010JCLI3501.1.
• Camp, J., and Coauthors, 2018: Skilful multiweek tropical cyclone prediction in ACCESS-S1 and the role of the MJO. Quart. J. Roy. Meteor. Soc., 144, 1337–1351, https://doi.org/10.1002/qj.3260.
• Cobon, D. H., L. Kouadio, S. Mushtaq, C. Jarvis, J. Carter, G. Stone, and P. Davis, 2019: Evaluating the shifts in rainfall and pasture-growth variabilities across the pastoral zone of Australia during 1910–2010. Crop Pasture Sci., 70, 634–647, https://doi.org/10.1071/CP18482.
• Cobon, D. H., R. Darbyshire, J. Crean, S. Kodur, M. Simpson, and C. Jarvis, 2020: Valuing seasonal climate forecasts in the northern Australia beef industry. Wea. Climate Soc., 12, 3–14, https://doi.org/10.1175/WCAS-D-19-0018.1.
• Cowan, T., and Coauthors, 2019: Forecasting the extreme rainfall, low temperatures, and strong winds associated with the northern Queensland floods of February 2019. Wea. Climate Extremes, 26, 100232, https://doi.org/10.1016/j.wace.2019.100232.
• Cowan, T., R. Stone, M. C. Wheeler, and M. Griffiths, 2020: Improving the seasonal prediction of Northern Australian rainfall onset to help with grazing management decisions. Climate Serv., 19, 100182, https://doi.org/10.1016/j.cliser.2020.100182.
• Darbyshire, R., and Coauthors, 2020: Insights into the value of seasonal climate forecasts to agriculture. Aust. J. Agric. Resour. Econ., 64, 1034–1058, https://doi.org/10.1111/1467-8489.12389.
• de Burgh-Day, C., M. Griffiths, H. Yan, G. Young, D. Hudson, and O. Alves, 2020: An adaptable framework for development and real time production of experimental sub-seasonal to seasonal forecast products. Bureau Research Rep. 042, 36 pp., http://s2sprediction.net/file/documents_reports/BRR-042.pdf.
• Dey, R., S. C. Lewis, J. M. Arblaster, and N. J. Abram, 2019: A review of past and projected changes in Australia’s rainfall. Wiley Interdiscip. Rev.: Climate Change, 10, e577, https://doi.org/10.1002/wcc.577.
• Dey, R., A. J. E. Gallant, and S. C. Lewis, 2020: Evidence of a continent-wide shift of episodic rainfall in Australia. Wea. Climate Extremes, 29, 100274, https://doi.org/10.1016/j.wace.2020.100274.
• Drosdowsky, W., 1996: Variability of the Australian summer monsoon at Darwin: 1957–1992. J. Climate, 9, 85–96, https://doi.org/10.1175/1520-0442(1996)009<0085:VOTASM>2.0.CO;2.
• Drosdowsky, W., and M. C. Wheeler, 2014: Predicting the onset of the North Australian wet season with the POAMA dynamical prediction system. Wea. Forecasting, 29, 150–161, https://doi.org/10.1175/WAF-D-13-00091.1.
• Hawcroft, M. K., S. Lavender, D. Copsey, S. Milton, J. Rodriguez, W. Tennant, S. Webster, and T. Cowan, 2021: The benefits of ensemble prediction for forecasting an extreme event: The Queensland floods of February 2019. Mon. Wea. Rev., 149, 2391–2408, https://doi.org/10.1175/MWR-D-20-0330.1.
• Hendon, H. H., and B. Liebmann, 1990a: A composite study of onset of the Australian summer monsoon. J. Atmos. Sci., 47, 2227–2240, https://doi.org/10.1175/1520-0469(1990)047<2227:ACSOOO>2.0.CO;2.
• Hendon, H. H., and B. Liebmann, 1990b: The intraseasonal (30–50 day) oscillation of the Australian summer monsoon. J. Atmos. Sci., 47, 2909–2924, https://doi.org/10.1175/1520-0469(1990)047<2909:TIDOOT>2.0.CO;2.
• Huang, B., and Coauthors, 2017: Extended Reconstructed Sea Surface Temperature, version 5 (ERSSTv5): Upgrades, validations, and intercomparisons. J. Climate, 30, 8179–8205, https://doi.org/10.1175/JCLI-D-16-0836.1.
• Hudson, D., and Coauthors, 2017: ACCESS-S1: The new Bureau of Meteorology multi-week to seasonal prediction system. J. South. Hemisphere Earth Syst. Sci., 67, 132–159, https://doi.org/10.1071/ES17009.
• Jones, C., A. Hazra, and L. M. V. Carvalho, 2015: The Madden–Julian oscillation and boreal winter forecast skill: An analysis of NCEP CFSv2 reforecasts. J. Climate, 28, 6297–6307, https://doi.org/10.1175/JCLI-D-15-0149.1.
• Jones, D. A., W. Wang, and R. Fawcett, 2009: High-quality spatial climate data-sets for Australia. Aust. Meteor. Oceanogr. J., 58, 233–248, https://doi.org/10.22499/2.5804.003.
• Kerr, N., and D. Abrecht, 1992: Opportunity knocks: Sowing wheat early in the north-eastern wheatbelt. J. Dep. Agric. West. Aust., Ser. 4, 33, 32–35, https://researchlibrary.agric.wa.gov.au/journal_agriculture4/vol33/iss1/13.
• King, A. D., L. V. Alexander, and M. G. Donat, 2013: Asymmetry in the response of eastern Australia extreme rainfall to low-frequency Pacific variability. Geophys. Res. Lett., 40, 2271–2277, https://doi.org/10.1002/grl.50427.
• King, A. D., N. P. Klingaman, L. V. Alexander, M. G. Donat, N. C. Jourdain, and P. Maher, 2014: Extreme rainfall variability in Australia: Patterns, drivers, and predictability. J. Climate, 27, 6035–6050, https://doi.org/10.1175/JCLI-D-13-00715.1.
• King, A. D., D. Hudson, E.-P. Lim, A. G. Marshall, H. H. Hendon, T. P. Lane, and O. Alves, 2020: Sub-seasonal to seasonal prediction of rainfall extremes in Australia. Quart. J. Roy. Meteor. Soc., 146, 2228–2249, https://doi.org/10.1002/qj.3789.
• Liebmann, B., and C. A. Smith, 2006: Description of a complete (interpolated) outgoing longwave radiation dataset. Bull. Amer. Meteor. Soc., 77, 1275–1277.
• Lim, E.-P., H. H. Hendon, D. L. T. Anderson, A. Charles, and O. Alves, 2011: Dynamical, statistical–dynamical, and multimodel ensemble forecasts of Australian spring season rainfall. Mon. Wea. Rev., 139, 958–975, https://doi.org/10.1175/2010MWR3399.1.
• Lisonbee, J., J. Ribbe, and M. Wheeler, 2019: Defining the north Australian monsoon onset: A systematic review. Prog. Phys. Geogr., 44, 398–418, https://doi.org/10.1177/0309133319881107.
• Lo, F., M. C. Wheeler, H. Meinke, and A. Donald, 2007: Probabilistic forecasts of the onset of the north Australian wet season. Mon. Wea. Rev., 135, 3506–3520, https://doi.org/10.1175/MWR3473.1.
• MacLachlan, C., and Coauthors, 2015: Global Seasonal forecast system version 5 (GloSea5): A high-resolution seasonal forecast system. Quart. J. Roy. Meteor. Soc., 141, 1072–1084, https://doi.org/10.1002/qj.2396.
• Marshall, A. G., and H. H. Hendon, 2015: Subseasonal prediction of Australian summer monsoon anomalies. Geophys. Res. Lett., 42, 10,913–10,919, https://doi.org/10.1002/2015GL067086.
• Marshall, A. G., and H. H. Hendon, 2019: Multi-week prediction of the Madden–Julian oscillation with ACCESS-S1. Climate Dyn., 52, 2513–2528, https://doi.org/10.1007/s00382-018-4272-6.
• Marshall, A. G., H. H. Hendon, and D. Hudson, 2021: Influence of the Madden–Julian Oscillation on multiweek prediction of Australian rainfall extremes using the ACCESS-S1 prediction system. J. South. Hemisphere Earth Syst. Sci., 71, 159–180, https://doi.org/10.1071/ES21001.
• McIntosh, P., C. Jakob, D. Karoly, and A. Pitman, 2017: Managing Climate Variability Program V: A research and development Operational Plan for 2016–17 to 2021–22. Meat and Livestock Australia Limited, 45 pp., https://www.mla.com.au/contentassets/515fa29953074efcb80b37247ab6a10d/b.cch.2105_final_report.pdf.
• Megann, A., and Coauthors, 2014: GO5.0: The joint NERC–Met Office NEMO global ocean model for use in coupled and forced applications. Geosci. Model Dev., 7, 1069–1092, https://doi.org/10.5194/gmd-7-1069-2014.
• Moise, A., I. Smith, J. R. Brown, R. Colman, and S. Narsey, 2020: Observed and projected intra-seasonal variability of Australian monsoon rainfall. Int. J. Climatol., 40, 2310–2327, https://doi.org/10.1002/joc.6334.
• Mollah, W. S., and I. M. Cook, 1996: Rainfall variability and agriculture in the semi-arid tropics—The northern territory, Australia. Agric. For. Meteor., 79, 39–60, https://doi.org/10.1016/0168-1923(95)02267-8.
• Moron, V., and A. W. Robertson, 2020: Tropical rainfall subseasonal-to-seasonal predictability types. npj Climate Atmos. Sci., 3, 4, https://doi.org/10.1038/s41612-020-0107-3.
• Narsey, S., M. J. Reeder, D. Ackerley, and C. Jakob, 2017: A midlatitude influence on Australian monsoon bursts. J. Climate, 30, 5377–5393, https://doi.org/10.1175/JCLI-D-16-0686.1.
• Narsey, S., M. J. Reeder, C. Jakob, and D. Ackerley, 2018: An evaluation of northern Australian wet season rainfall bursts in CMIP5 models. J. Climate, 31, 7789–7802, https://doi.org/10.1175/JCLI-D-17-0637.1.
• Pan, B., K. Hsu, A. AghaKouchak, S. Sorooshian, and W. Higgins, 2019: Precipitation prediction skill for the west coast United States: From short to extended range. J. Climate, 32, 161–182, https://doi.org/10.1175/JCLI-D-18-0355.1.
• Sharmila, S., and H. H. Hendon, 2020: Mechanisms of multiyear variations of Northern Australia wet-season rainfall. Sci. Rep., 10, 5086, https://doi.org/10.1038/s41598-020-61482-5.
• Troup, A. J., 1961: Variations in upper tropospheric flow associated with the onset of the Australian summer monsoon. Indian J. Meteor. Geophys., 12, 217–230, http://hdl.handle.net/102.100.100/331591?index=1.
• Walters, D., and Coauthors, 2017: The Met Office Unified Model Global Atmosphere 6.0/6.1 and JULES Global Land 6.0/6.1 configurations. Geosci. Model Dev., 10, 1487–1520, https://doi.org/10.5194/gmd-10-1487-2017.
• Wheeler, M. C., and J. L. McBride, 2005: Australian-Indonesian monsoon. Intraseasonal Variability in the Atmosphere-Ocean Climate System, W. K. M. Lau and D. E. Waliser, Eds., Springer, 125–173, https://doi.org/10.1007/3-540-27250-X_5.
• Wheeler, M. C., H. H. Hendon, S. Cleland, H. Meinke, and A. Donald, 2009: Impacts of the Madden–Julian Oscillation on Australian rainfall and circulation. J. Climate, 22, 1482–1498, https://doi.org/10.1175/2008JCLI2595.1.
1 A rainfall summary for January 2018, including TC Joyce, can be viewed at http://www.bom.gov.au/climate/current/month/aus/archive/201801.summary.shtml#rainfall.
