Search Results
You are looking at 1–10 of 15 items for:
- Author or Editor: J. Anderson
- Journal of Climate
- Refine by Access: All Content
Abstract
Reanalysis datasets that are produced by assimilating observations into numerical forecast models may contain unrealistic features owing to the influence of the underlying model. The authors have evaluated the potential for such errors to affect the depiction of summertime low-level jets (LLJs) in the NCEP–NCAR reanalysis by comparing the incidence of LLJs over 7 yr (1992–98) in the reanalysis to hourly observations obtained from the NOAA Wind Profiler Network. The profiler observations are not included in the reanalysis, thereby providing an independent evaluation of the ability of the reanalysis to represent LLJs.
LLJs in the NCEP–NCAR reanalysis exhibit realistic spatial structure, but strong LLJs are infrequent in the lee of the Rocky Mountains, causing substantial bias in LLJ frequency. In this region the forecast by the reanalysis model diminishes the ageostrophic wind, forcing the analysis scheme to restore it. The authors recommend sensitivity tests of LLJ simulations by GCMs in which terrain resolution and horizontal grid spacing are varied independently.
Abstract
A significant reduction (increase) of tropical storm activity over the Atlantic basin is observed during El Niño (La Niña) events. Furthermore, the number of Atlantic tropical storms displays an interdecadal variability with more storms in the 1950s and 1960s than in the 1970s and 1980s. Ensembles of simulations with an atmospheric general circulation model (AGCM) are used to explore the mechanisms responsible for this observed variability.
The interannual variability is investigated using a 10-member ensemble of AGCM simulations forced by climatological SSTs of the 1980s everywhere except over the tropical Pacific and Indian Oceans. Significantly fewer tropical storms are simulated with El Niño SSTs imposed over the tropical Pacific and Indian Oceans than with La Niña conditions. Increased simulated vertical wind shear over the Atlantic is the most likely explanation for the reduction of simulated tropical storms during El Niño years. SST forcing from different El Niño events has distinct impacts on Atlantic tropical storms in the simulation: simulated tropical storms are significantly less numerous with 1982 SSTs imposed over the tropical Pacific and Indian Oceans than with 1986 SSTs.
The interdecadal variability of tropical storm activity seems to coincide with an interdecadal variability of the North Atlantic SSTs with colder SSTs in the 1970s than in the 1950s. Ensembles of AGCM simulations produce significantly more tropical storms when forced by observed SSTs of the 1950s than when forced by SSTs of the 1970s. This supports the theory that the interdecadal variability of SSTs has a significant impact on the expected number of Atlantic tropical storms and suggests that Atlantic tropical storms may be more numerous in coming years if North Atlantic SSTs are getting warmer. A significant increase of vertical wind shear and a significant decrease in the convective available potential energy over the tropical Atlantic in the 1970s may explain the simulated interdecadal variability of Atlantic tropical storms.
Abstract
The ability of persistent midlatitude convective regions to influence hemispheric circulation patterns during the Northern Hemisphere summer is investigated. Global rainfall data over a 15-yr period indicate anomalously large July total rainfalls occurred over mesoscale-sized, midlatitude regions of North America and/or Southeast Asia during the years of 1987, 1991, 1992, and 1993. The anomalous 200-hPa vorticity patterns for these same years are suggestive of Rossby wave trains emanating from the regions of anomalous rainfall in the midlatitudes.
Results from an analysis of an 11-yr mean monthly 200-hPa July wind field indicate that, in the climatological mean, Rossby waveguides are present that could assist in developing a large-scale response from mesoscale-sized regions of persistent convection in the midlatitudes. This hypothesis is tested using a barotropic model linearized about the 200-hPa July time-mean flow and forced by the observed divergence anomalies. The model results are in qualitative agreement with the observed July vorticity anomalies for the four years investigated. Model results forced by observed tropical forcings for the same years do not demonstrate any significant influence on the midlatitude circulation. It is argued that persistent midlatitude convective regions may play a role in the development, maintenance, and dissipation of the large-scale circulations that help to support the convective regions.
Abstract
This study documents and evaluates the boundary layer and energy budget response to record low 2007 sea ice extents in the Community Atmosphere Model version 4 (CAM4) using 1-day observationally constrained forecasts and 10-yr runs with a freely evolving atmosphere. While near-surface temperature and humidity are minimally affected by sea ice loss in July 2007 forecasts, near-surface stability decreases and atmospheric humidity increases aloft over newly open water in September 2007 forecasts. Ubiquitous low cloud increases over the newly ice-free Arctic Ocean are found in both the July 2007 and the September 2007 forecasts. In response to the 2007 sea ice loss, net surface [top of the atmosphere (TOA)] energy budgets change by +19.4 W m−2 (+21.0 W m−2) and −17.9 W m−2 (+1.4 W m−2) in the July 2007 and September 2007 forecasts, respectively. While many aspects of the forecasted response to sea ice loss are consistent with physical expectations and available observations, CAM4’s ubiquitous July 2007 cloud increases over newly open water are not. The unrealistic cloud response results from the global application of a parameterization designed to diagnose stratus clouds based on lower-tropospheric stability (CLDST). In the Arctic, the well-mixed boundary layer assumption implicit in CLDST is violated. Requiring a well-mixed boundary layer to diagnose stratus clouds improves the CAM4 cloud response to sea ice loss and increases July 2007 surface (TOA) energy budgets over newly open water by +11 W m−2 (+14.9 W m−2). Of importance to high-latitude climate feedbacks, unrealistic stratus cloud compensation for sea ice loss occurs only when stable and dry atmospheric conditions exist. Therefore, coupled climate projections that use CAM4 will underpredict Arctic sea ice loss only when dry and stable summer conditions occur.
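The stability-based stratus diagnosis and the well-mixed requirement described above can be sketched as follows. This is a minimal illustration, not CAM4's actual CLDST code: the regression coefficients and the `well_mixed` mask are assumptions standing in for the model's lower-tropospheric-stability regression and the boundary layer check the abstract describes.

```python
import numpy as np

def diagnose_stratus(theta_700, theta_sfc, well_mixed,
                     slope=0.057, intercept=-0.5573):
    """Schematic stability-based stratus fraction with a well-mixed requirement.

    A Klein-Hartmann-type regression on lower-tropospheric stability; the
    coefficients are illustrative, not CAM4's exact values, and `well_mixed`
    is a boolean stand-in for the boundary layer check described above.
    """
    lts = theta_700 - theta_sfc                         # lower-tropospheric stability (K)
    cloud = np.clip(slope * lts + intercept, 0.0, 1.0)  # more stable, more stratus
    return np.where(well_mixed, cloud, 0.0)             # suppress over decoupled columns
```

Without the `well_mixed` check, a stable but decoupled column over newly open Arctic water would still be assigned stratus; requiring a well-mixed boundary layer removes that spurious compensation, mirroring the fix the abstract reports.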
Abstract
The present study examines the simulation of the number of tropical storms produced in GCM integrations with a prescribed SST. A 9-member ensemble of 10-yr integrations (1979–88) of a T42 atmospheric model forced by observed SSTs has been produced; each ensemble member differs only in the initial atmospheric conditions. An objective procedure for tracking model-generated tropical storms is applied to this ensemble during the last 9 yr of the integrations (1980–88). The seasonal and monthly variations of tropical storm numbers are compared with observations for each ocean basin.
Statistical tools such as the chi-square test, the F test, and the t test are applied to the ensemble number of tropical storms, leading to the conclusion that the potential predictability is particularly strong over the western North Pacific and the eastern North Pacific, and to a lesser extent over the western North Atlantic. A set of tools including the joint probability distribution and the ranked probability score is used to evaluate the skill of this ensemble simulation. The simulation skill over the western North Atlantic basin appears to be exceptionally high, particularly during years of strong potential predictability.
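The potential-predictability comparison described here, an SST-forced signal common to all members versus internal noise across members, can be sketched with a simple F ratio on synthetic storm counts. The ensemble size, the Poisson rates, and the exact test construction are illustrative assumptions, not the paper's data or procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical storm counts: 9 ensemble members x 9 seasons, each season with its
# own SST-forced expected count (the "signal") plus Poisson scatter (the "noise")
rates = np.array([6, 9, 4, 8, 10, 5, 7, 11, 6])
counts = rng.poisson(lam=rates, size=(9, len(rates)))
n_members, n_years = counts.shape

# F ratio: interannual variance of the ensemble mean vs. the internal member spread
signal_var = counts.mean(axis=0).var(ddof=1)
noise_var = counts.var(axis=0, ddof=1).mean() / n_members
f_ratio = signal_var / noise_var
p_value = stats.f.sf(f_ratio, dfn=n_years - 1, dfd=n_years * (n_members - 1))
```

A small `p_value` indicates more variance in the ensemble mean than internal variability alone explains, i.e., potential predictability attributable to the common SST forcing.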
Abstract
The TOPEX/Poseidon and ERS-1/2 satellites have now been observing sea level anomalies for a continuous time span of more than 6 yr. These sea level observations are first compared with tide gauge data and then assimilated into an ocean model that is used to initialize coupled ocean–atmosphere forecasts with a lead time of 6 months. Ocean analyses in which altimeter data are assimilated are compared with those from a no-assimilation experiment and with analyses in which subsurface temperature observations are assimilated. Analyses with altimeter data show variations of upper-ocean heat content similar to analyses using subsurface observations, whereas the ocean model has large errors when no data are assimilated. However, obtaining good results from the assimilation of altimeter data is not straightforward: it is essential to add a good mean sea level to the observed anomalies, to filter the sea level observations appropriately, to start the analyses from realistic initial temperature and salinity fields, and to assign appropriate weights for the analyzed increments.
To assess the impact of altimeter data assimilation on the coupled system, ensemble hindcasts are initialized from ocean analyses in which either no data, subsurface temperatures, or sea level observations were assimilated. For each kind of ocean analysis, a five-member ensemble is started every 3 months from January 1993 to October 1997, adding up to 100 forecasts for each type. The predicted SST anomalies for the equatorial Pacific are intercompared between the experiments and against observations. The predicted anomalies are on average closer to observed values when forecasts are initialized from the ocean analysis using altimeter data than when initialized from the no-assimilation ocean analysis, and forecast errors appear to be only slightly larger than for forecasts initialized from ocean analyses using subsurface temperatures. However, even based on 100 coupled forecasts, the distinction between the two experiments that benefit from data assimilation is barely statistically significant. The verification should still be considered preliminary, because the period covered by the forecasts is only 5 yr, which is too short to sample ENSO variability properly. It is, nonetheless, encouraging that altimeter assimilation can improve the forecast skill to a level comparable to that obtained from using Tropical Ocean Atmosphere–expendable bathythermograph data.
Abstract
Using weather station data, the parameters of a stationary stochastic weather model (SSWM) for daily precipitation over the contiguous United States are estimated. By construction, the model exactly captures the variance component of seasonal precipitation characteristics (intensity, occurrence, and total amount) arising from high-frequency variance. By comparing the variance of the lower-frequency accumulations (on the order of months) between the SSWM and the original measurements, potential predictability (PP) is estimated. Decomposing the variability into contributions from occurrence and intensity allows one to establish two contributing sources of total PP. Aggregated occurrence is found to have higher PP than either intensity or the seasonal total precipitation, and occurrence and intensity are found to interfere destructively when convolved into seasonal totals. It is recommended that efforts aimed at forecasting seasonal precipitation or attributing climate variability to particular processes should analyze occurrence and intensity separately to maximize signal-to-noise ratios. Significant geographical and seasonal variations exist in all PP components.
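The variance comparison at the heart of this approach can be illustrated with synthetic data. In place of a fitted SSWM, a day-shuffling null is used here as a stationary stand-in that preserves the high-frequency daily distribution while destroying low-frequency structure; the record length, gamma parameters, and shuffle count are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical daily precipitation: 10 seasons x 90 days, built as high-frequency
# gamma noise modulated by a slowly varying (season-to-season) factor
slow = rng.gamma(2.0, 1.0, size=(10, 1))
daily = rng.gamma(0.3, 4.0, size=(10, 90)) * slow
seasonal_totals = daily.sum(axis=1)

# Stationary null: shuffle days across seasons, keeping the daily distribution
# but removing low-frequency structure (the role the SSWM plays by construction)
null_vars = [
    rng.permutation(daily.ravel()).reshape(10, 90).sum(axis=1).var(ddof=1)
    for _ in range(200)
]

# Potential predictability: seasonal-total variance in excess of the stationary null
pp = 1.0 - np.mean(null_vars) / seasonal_totals.var(ddof=1)
```

The same ratio could be computed separately for occurrence (wet-day counts) and intensity (mean wet-day amount) to reproduce the decomposition the abstract recommends.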
Abstract
Tropical storms simulated by a nine-member ensemble of GCM integrations forced by observed SSTs have been tracked by an objective procedure for the period 1980–88. Statistics on tropical storm frequency, intensity, and first location have been produced. Statistical tools such as the chi-square and the Kolmogorov–Smirnov test indicate that there is significant potential predictability of interannual variability of simulated tropical storm frequency, intensity, and first location over most of the ocean basins. The only common point between the nine members of the ensemble is the SST forcing. This implies that SSTs play a fundamental role in model tropical storm frequency, intensity, and first location interannual variability. Although the interannual variability of tropical storm statistics is clearly affected by SST forcing in the GCM, there is also a considerable amount of noise related to internal variability of the model. An ensemble of atmospheric model simulations allows one to filter this noise and gain a better understanding of the mechanisms leading to interannual tropical storm variability.
An EOF analysis of local SSTs over each ocean basin and a combined EOF analysis of vertical wind shear, 850-mb vorticity, and 200-mb vorticity have been performed. Over some ocean basins such as the western North Atlantic, the interannual frequency of simulated tropical storms is highly correlated to the first combined EOF, but it is not significantly correlated to the first EOF of local SSTs. This suggests that over these basins the SSTs have an impact on the simulated tropical storm statistics from a remote area through the large-scale circulation as in observations. Simulated and observed tropical storm statistics have been compared. The interannual variability of simulated tropical storm statistics is consistent with observations over the ocean basins where the model simulates a realistic interannual variability of the large-scale circulation.
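The EOF machinery used above can be demonstrated on a synthetic anomaly field via the singular value decomposition; the field, its single imposed mode, and the dimensions are hypothetical, not the study's wind shear or vorticity data.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical anomaly field: 9 years x 500 grid points with one dominant mode
n_years, n_grid = 9, 500
pattern = np.sin(np.linspace(0.0, np.pi, n_grid))    # imposed spatial structure
pcs = 3.0 * rng.standard_normal(n_years)             # its year-to-year amplitude
field = np.outer(pcs, pattern) + rng.standard_normal((n_years, n_grid))

# EOFs are the right singular vectors of the centered (time-mean removed) data
anom = field - field.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                                            # rows: spatial patterns
expansion = u * s                                    # principal-component series
explained = s**2 / np.sum(s**2)                      # variance fraction per mode
```

A combined EOF, as in the abstract, would simply stack several normalized fields (e.g., shear and vorticity at two levels) along the grid-point axis before the decomposition.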
Abstract
Projections of modeled precipitation (P) change in global warming scenarios demonstrate marked intermodel disagreement at regional scales. Empirical orthogonal functions (EOFs) and maximum covariance analysis (MCA) are used to diagnose spatial patterns of disagreement in the simulated climatology and end-of-century P changes in phase 5 of the Coupled Model Intercomparison Project (CMIP5) archive. The term principal uncertainty pattern (PUP) is used for any robust mode calculated when applying these techniques to a multimodel ensemble. For selected domains in the tropics, leading PUPs highlight features at the margins of convection zones and in the Pacific cold tongue. The midlatitude Pacific storm track is emphasized given its relevance to wintertime P projections over western North America. The first storm-track PUP identifies a sensitive region of disagreement in P increases over the eastern midlatitude Pacific where the storm track terminates, related to uncertainty in an eastward extension of the climatological jet. The second PUP portrays uncertainty in a zonally asymmetric meridional shift of storm-track P, related to uncertainty in the extent of a poleward jet shift in the western Pacific. Both modes appear to arise primarily from intermodel differences in the response to radiative forcing, distinct from sampling of internal variability. The leading storm-track PUPs for P and zonal wind change exhibit similarities to the leading uncertainty patterns for the historical climatology, indicating important and parallel sensitivities in the eastern Pacific storm-track terminus region. However, expansion coefficients for climatological uncertainties tend to be weakly correlated with those for end-of-century change.
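The MCA step behind these coupled uncertainty patterns can be sketched as an SVD of the intermodel cross-covariance between two fields. The toy ensemble below shares one common "disagreement" mode by construction; the model count, grid sizes, and noise level are assumptions, not CMIP5 values.

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical multimodel spread: 30 models x 200 grid points each for P change
# and zonal wind change, sharing one common intermodel mode plus noise
n_models, n_grid = 30, 200
shared = rng.standard_normal(n_models)               # common disagreement amplitude
p_pat = rng.standard_normal(n_grid)
u_pat = rng.standard_normal(n_grid)
P = np.outer(shared, p_pat) + 0.5 * rng.standard_normal((n_models, n_grid))
U = np.outer(shared, u_pat) + 0.5 * rng.standard_normal((n_models, n_grid))

# Center across models, then SVD the cross-covariance matrix (the MCA step)
Pa, Ua = P - P.mean(axis=0), U - U.mean(axis=0)
C = Pa.T @ Ua / (n_models - 1)
left, s, right_t = np.linalg.svd(C, full_matrices=False)

# Expansion coefficients: each model's projection onto the leading coupled patterns
a = Pa @ left[:, 0]
b = Ua @ right_t[0]
frac_sq_cov = s[0]**2 / np.sum(s**2)                 # squared-covariance fraction
```

Correlating the expansion coefficients `a` and `b` across models is the analog of the climatology-versus-change comparison the abstract describes for its PUPs.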
Abstract
While low-frequency variations in precipitation amount, occurrence counts (hereafter “occurrence”), and intensity can take place on seasonal to multidecadal time scales, it is often unclear at which time scales these precipitation variations can be ascribed to potentially predictable, climate-induced changes versus simple, stochastic (i.e., random) precipitation event evolutions. This paper seeks to isolate the dominant time scales at which potentially predictable changes in observed precipitation characteristics occur over the continental United States and analyze sources of revealed potentially predictable precipitation variations for particular regions. The results highlight that at interannual time scales (here defined as those shorter than 7 years), the potential for predicting annual precipitation amounts tends to be higher than for annual event occurrence or intensity, with interannual potential predictability highest in both relatively dry and wet locations and lowest in transition regions. By contrast, at time scales greater than 7 years the potential for predicting annual event occurrence tends to be higher than amount or intensity, with >20-yr time scale potential predictability highest in relatively wet locations and lowest in relatively dry locations. To highlight the utility of this type of analysis, two robust signals are selected for further investigation, including 1) approximately 10-yr time scale variations in potentially predictable annual amounts over the northwestern United States and 2) 20–60-yr time scale variations in potentially predictable annual event occurrence over the southwestern United States. While mechanistic drivers for these observed variations are still being investigated, concurrent and precursor climate-state estimates in the atmosphere and ocean—principally over the Pacific sector—are provided, the monitoring of which may help realize the potential for predicting precipitation variations in these regions.