Abstract
In this study, we analyze various sources of CAPE in the environment and their contributions to its time tendency, complementing forecast models and operational analyses that are relatively coarse in time (∼1 h). As a case study, the relative roles of direct insolation and near-surface moisture advection in the recovery of CAPE on 31 March 2016 in northern Alabama are examined using VORTEX-Southeast (VORTEX-SE) observations and numerical simulations. In between rounds of nontornadic morning storms and tornadic evening storms, CAPE over the VORTEX-SE domain increased from near zero to at least 500 J kg−1. A timeline of the day’s events is provided with a focus on the evolution of the lower levels of the atmosphere. We focus on the responses of CAPE to solar insolation and moisture advection, which we hypothesize were the main mechanisms behind its recovery. Data from the University of Massachusetts S-band frequency-modulated, continuous-wave (FMCW) radar and the NOAA National Severe Storms Laboratory (NSSL) Collaborative Lower Atmospheric Mobile Profiling System (CLAMPS), together with high-resolution EnKF analyses from the Advanced Regional Prediction System (ARPS), are used to characterize the boundary layer evolution in the pre-tornadic storm environment. It is found that insolation-driven surface diabatic heating was the primary driver of rapid CAPE recovery on this day. The methodology developed in this case can be applied in other scenarios to diagnose the primary drivers of CAPE development.
Significance Statement
The mechanisms by which atmospheric instability recovers can vary widely and are often a source of uncertainty in forecasting. We want to understand how and why the environment destabilized enough to produce an evening tornado following morning storms on 31 March 2016. To do this, we used model data and observations from a collocated radar and profiler. It was found that heating from the sun at the surface was the primary cause of destabilization in the environment.
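CAPE, the quantity whose recovery from near zero to at least 500 J kg−1 is discussed above, is the vertically integrated positive buoyancy of a lifted parcel. The sketch below computes that integral for a hypothetical sounding; the profile values are illustrative and not taken from the VORTEX-SE case.

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def cape(z, t_parcel, t_env):
    """Positive-area CAPE (J kg^-1) by trapezoidal integration of buoyancy.

    z: heights (m); t_parcel, t_env: parcel and environment (virtual)
    temperatures (K). Negative buoyancy is clipped so only the positive
    area contributes, mirroring the usual CAPE definition.
    """
    buoy = np.clip(G * (t_parcel - t_env) / t_env, 0.0, None)
    return float(np.sum(0.5 * (buoy[1:] + buoy[:-1]) * np.diff(z)))

# Hypothetical sounding: a parcel 1 K warmer than the environment in a
# 2-4-km layer (values illustrative, not observed).
z = np.linspace(0.0, 10000.0, 101)
t_env = 288.0 - 0.0065 * z                      # standard lapse rate
t_parcel = t_env + np.where((z > 2000.0) & (z < 4000.0), 1.0, 0.0)
print(round(cape(z, t_parcel, t_env), 1))
```

A 1-K warm layer of this depth yields only a few tens of J kg−1, which illustrates why the several-hundred J kg−1 recovery described above requires sustained boundary layer warming or moistening.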
Abstract
Atmospheric reanalyses are widely used to estimate the past atmospheric near-surface state over sea ice. They provide boundary conditions for sea ice and ocean numerical simulations and relevant information for studying polar variability and anthropogenic climate change. Previous research revealed the existence of large near-surface temperature biases (mostly warm) over the Arctic sea ice in the current generation of atmospheric reanalyses, which is linked to a poor representation of the snow over the sea ice and the stably stratified boundary layer in the forecast models used to produce the reanalyses. These errors can compromise the use of reanalysis products in support of polar research. Here, we train a fully connected neural network that learns from remote sensing infrared temperature observations to correct the existing generation of uncoupled atmospheric reanalyses (ERA5, JRA-55) based on a set of sea ice and atmospheric predictors, which are themselves reanalysis products. The advantages of the proposed correction scheme over previous calibration attempts are the consideration of the synoptic weather and cloud state, compatibility of the predictors with the mechanism responsible for the bias, and a self-emerging seasonality and multidecadal trend consistent with the declining sea ice state in the Arctic. The correction leads on average to a 27% temperature bias reduction for ERA5 and 7% for JRA-55 when compared with independent in situ observations from the MOSAiC campaign (respectively, 32% and 10% under clear-sky conditions). These improvements can be beneficial for forced sea ice and ocean simulations, which rely on reanalysis surface fields as boundary conditions.
Significance Statement
This study illustrates a novel method based on machine learning for reducing the systematic surface temperature errors that characterize multiple atmospheric reanalyses in sea ice–covered regions of the Arctic under clear-sky conditions. The correction applied to the temperature field is consistent with the local weather and the sea ice and snow conditions, meaning that it responds to seasonal changes in sea ice cover as well as to its long-term decline due to global warming. The corrected reanalysis temperature can be employed to support polar research activities, and in particular to better simulate the evolution of the interacting sea ice and ocean system within numerical models.
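The correction described above trains a fully connected network on reanalysis predictors to reproduce an observed temperature bias. The sketch below trains a minimal one-hidden-layer network of that kind on synthetic data; the predictor set, the bias relation, and the architecture details are placeholders, not the paper's actual choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for predictors (e.g., sea ice concentration, 2-m
# temperature, a cloud proxy) and a nonlinear "bias" target in kelvin.
X = rng.uniform(-1.0, 1.0, size=(2000, 3))
y = X[:, 0] * X[:, 2] + 0.5 * X[:, 1]

# One tanh hidden layer, trained by full-batch gradient descent on MSE.
W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, n = 0.1, len(y)
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    err = (h @ W2 + b2).ravel() - y             # prediction error
    gW2 = h.T @ err[:, None] / n
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (1.0 - h**2)   # backprop through tanh
    gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
mse = float(np.mean((pred - y) ** 2))
print(round(mse, 4))   # the learned bias would be subtracted from reanalysis T
```

In application, the predicted bias is subtracted from the reanalysis temperature field; the point of the sketch is only the shape of the learning problem, not its real predictors or skill.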
Abstract
Midlatitude cyclones approaching coastal mountain ranges experience flow modifications on a variety of scales including orographic lift, blocking, mountain waves, and valley flows. During the 2015/16 Olympic Mountain Experiment (OLYMPEX), a pair of scanning ground radars observed precipitating clouds as they were modified by these orographically induced flows. The DOW radar, positioned to scan up the windward Quinault Valley, conducted RHI scans during 285 h of precipitation, 80% of which contained reversed, down-valley flow at lower levels. The existence of down-valley flow in the Quinault Valley was found to be well correlated with upstream flow blocking and the large-scale sea level pressure gradient oriented down the valley. Deep down-valley flow occurred in environments with high moist static stability and southerly winds, conditions that are common in prefrontal sectors of midlatitude cyclones in the coastal Pacific Northwest. Finally, a case study of prolonged down-valley flow in a prefrontal storm sector was simulated to investigate whether latent heat absorption (cooling) contributed to the event. Three experiments were conducted: a Control simulation and two simulations where the temperature tendencies from melting and evaporation were separately turned off. Results indicated that evaporative cooling had a stronger impact on the event’s down-valley flow than melting, likely because evaporation occurred within the low-level down-valley flow layer. Through these experiments, we show that evaporation helped prolong down-valley flow for several hours past the time of the event’s warm frontal passage.
Significance Statement
This paper analyzes the characteristics of down-valley flow over the windward Quinault Valley on the Olympic Peninsula of Washington State using data from OLYMPEX, with an emphasis on regional pressure differences and blocking metrics. Results demonstrate that the location of precipitation over the Olympic Peninsula is shifted upstream during events with deep down-valley flow, consistent with blocked upstream airflow. A case study of down-valley flow highlights the role of evaporative cooling to prolong the flow reversal.
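Upstream flow blocking of the kind correlated with down-valley flow is commonly diagnosed with the nondimensional mountain height M = Nh/U. A minimal sketch follows; the stability, wind, and terrain-height values are assumed for illustration, not OLYMPEX measurements.

```python
def nondim_mountain_height(N, U, h):
    """M = N h / U; values around 1 or greater suggest the upstream flow
    lacks the kinetic energy to surmount the barrier (illustrative threshold)."""
    return N * U**-1 * h

# Hypothetical prefrontal upstream conditions: stable, weak cross-barrier wind.
N = 0.012   # moist static stability (s^-1), assumed
U = 8.0     # cross-barrier wind speed (m s^-1), assumed
h = 2000.0  # crest height scale (m), approximate
M = nondim_mountain_height(N, U, h)
print(round(M, 2), "blocked" if M > 1 else "unblocked")
```

With these stable, weak-wind prefrontal values the metric is well above unity, consistent with the blocked, upstream-shifted precipitation regime described above.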
Abstract
It has long been recognized that in thermodynamic fields retrieved from multiple-Doppler radar synthesized winds, an unknown constant exists on each horizontal level, leading to an ambiguity in the retrieved vertical structure. In this study, the traditional thermodynamic retrieval scheme is significantly improved by implementing the equation of state (EoS) as an additional constraint. With this new formulation, the ambiguity of the vertical structure can be explicitly identified and removed from the retrieved three-dimensional thermodynamic fields. The only independent in situ observations needed to perform the correction are the pressure and temperature measurements taken at a single surface station. If data from multiple surface stations are available, a strategy is proposed to obtain a better estimate of the unknown constant. Experiments in this research were conducted under the observation system simulation experiment (OSSE) framework to demonstrate the validity of the new approach. Problems and possible solutions associated with using real datasets, and potential future extended applications of this new method, are discussed.
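The crux of the problem above is that the retrieved field on each level is known only up to an additive constant, which in situ surface data can pin down. The sketch below illustrates just that offset-fixing step, including the multi-station averaging strategy, on synthetic data; the EoS constraint itself and the full retrieval are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# "True" perturbation pressure (Pa) on one level, and the same field as a
# retrieval would deliver it: correct spatial structure, unknown offset.
truth = rng.normal(0.0, 50.0, size=(20, 20))
retrieved = truth - truth.mean() + 123.0         # arbitrary additive constant

# Surface stations provide absolute values at a few points, with sensor noise.
stations = [(3, 4), (10, 15), (17, 2)]
obs = [truth[i, j] + rng.normal(0.0, 1.0) for i, j in stations]

# Estimate the constant as the mean station-minus-retrieval difference;
# averaging over several stations damps the observation noise.
C = float(np.mean([o - retrieved[i, j] for o, (i, j) in zip(obs, stations)]))
corrected = retrieved + C
print(round(float(np.abs(corrected - truth).mean()), 3))
```

After the correction, the residual error is set by the averaged station noise rather than the arbitrary offset, which is why more stations give a better estimate of the constant.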
Abstract
Reliably quantifying uncertainty in precipitation forecasts remains a critical challenge. This work examines the application of a deep learning (DL) architecture, Unet, for postprocessing deterministic numerical weather predictions of precipitation to improve their skill and to derive forecast uncertainty. Daily accumulated 0–4-day precipitation forecasts are generated from a 34-yr reforecast based on the West Weather Research and Forecasting (West-WRF) mesoscale model, developed by the Center for Western Weather and Water Extremes. The Unet learns the distributional parameters associated with a censored, shifted gamma distribution. In addition, the DL framework is tested against state-of-the-art benchmark methods, including an analog ensemble, nonhomogeneous regression, and a mixed-type meta-Gaussian distribution. These methods are evaluated over four years of data across the western United States. The Unet outperforms the benchmark methods at all lead times as measured by continuous ranked probability and Brier skill scores. The Unet also produces a reliable estimation of forecast uncertainty, as measured by binned spread–skill relationship diagrams. Additionally, the Unet has the best performance for extreme events (i.e., the 95th and 99th percentiles of the distribution), and for these cases its performance improves as more training data are available.
Significance Statement
Accurate precipitation forecasts are critical for social and economic sectors. They also play an important role in our daily activity planning. The objective of this research is to investigate how to use a deep learning architecture to postprocess high-resolution (4 km) precipitation forecasts and generate accurate and reliable forecasts with quantified uncertainty. The proposed approach performs well with extreme cases and its performance improves as more data are available in training.
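A censored, shifted gamma distribution places finite probability mass at exactly zero precipitation, which is what lets a distributional output handle dry days. The sketch below samples such a distribution with fixed parameters; in the work above the parameters are predicted per grid point by the Unet, and the values here are purely illustrative.

```python
import numpy as np

def sample_csgd(shape, scale, shift, size, rng):
    """Censored, shifted gamma draws: precip = max(gamma - shift, 0).

    The shift moves part of the gamma mass below zero; censoring maps that
    mass onto exactly zero, representing the probability of no precipitation.
    """
    return np.maximum(rng.gamma(shape, scale, size) - shift, 0.0)

rng = np.random.default_rng(0)
draws = sample_csgd(shape=1.2, scale=4.0, shift=2.0, size=100000, rng=rng)
p_dry = float((draws == 0.0).mean())   # Monte Carlo estimate of P(precip = 0)
print(round(p_dry, 3))
```

The dry-day probability equals the gamma CDF evaluated at the shift, so the three parameters jointly control both the chance of rain and the shape of the wet-day amounts.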
Abstract
A multiscale alignment (MSA) ensemble filtering method was introduced by Ying to reduce nonlinear position errors effectively during data assimilation. The MSA method extends the traditional ensemble Kalman filter (EnKF) to update states from large to small scales sequentially, leveraging displacement vectors derived from the large-scale analysis increments to reduce position errors at smaller scales through warping of the model grid. This study stress tests the MSA method in various scenarios using an idealized vortex model. We show that the MSA improves filter performance as the number of scales (Ns) increases in the presence of nonlinear position errors. We tuned localization parameters for the cross-scale EnKF updates to find the best performance when assimilating an observation network. To further reduce the scale mismatch between observations and states, a new option called MSA-O is introduced to decompose observations into scale components during assimilation. Cycling DA experiments show that the MSA-O consistently outperforms the traditional EnKF at equal computational cost. A more challenging scenario for the MSA is identified when the large-scale background flow and the small-scale vortex have incoherent errors, making the displacement vectors ineffective at reducing vortex position errors. Observation availability at the small scales also limits the use of large Ns for the MSA. Potential remedies for these issues are discussed.
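The sequential large-to-small updates above rely on splitting fields (and, for MSA-O, observations) into scale components that sum back to the original. A spectral band-splitting sketch of that decomposition follows; the cutoff wavenumbers are illustrative, and the paper's actual filter choices may differ.

```python
import numpy as np

def scale_bands(field, cutoffs):
    """Split a 2D field into scale components with spectral low-pass filters.

    cutoffs: increasing wavenumber thresholds. Band i holds the scales
    between successive cutoffs, so the components sum exactly back to the
    original field, the property a sequential multiscale update relies on.
    """
    kx = np.fft.fftfreq(field.shape[0]) * field.shape[0]
    ky = np.fft.fftfreq(field.shape[1]) * field.shape[1]
    kmag = np.hypot(*np.meshgrid(kx, ky, indexing="ij"))
    spec = np.fft.fft2(field)
    bands, prev = [], np.zeros_like(field)
    for kc in cutoffs + [np.inf]:
        low = np.real(np.fft.ifft2(np.where(kmag <= kc, spec, 0.0)))
        bands.append(low - prev)   # the band between the previous cutoff and kc
        prev = low
    return bands

rng = np.random.default_rng(0)
f = rng.normal(size=(64, 64))
bands = scale_bands(f, cutoffs=[4, 12])   # Ns = 3 scale components
print(len(bands), bool(np.allclose(sum(bands), f)))
```

In the MSA setting the largest-scale band is updated first, its increments define the displacement vectors, and the warped smaller-scale bands are updated in turn.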
Abstract
Numerical weather prediction models contain parameters that are inherently uncertain and cannot be determined exactly. It is thus desirable to have reliable objective approaches for estimation of optimal values and uncertainties of these parameters. Traditionally, the parameter tuning has been done manually, which can lead to the tuning process being a maze of subjective choices. In this paper we present how to optimize 20 key physical parameters in the atmospheric model Open Integrated Forecasting System (OpenIFS) that have a strong impact on forecast quality. The results show that simultaneous optimization of O(20) parameters is possible with O(100) algorithm steps using an ensemble of O(20) members; the results also show that the optimized parameters lead to substantial enhancement of predictive skill. The enhanced predictive skill can be attributed to reduced biases in low-level winds and upper-tropospheric humidity in the optimized model. We find that the optimization process is dependent on the starting values of the parameters that are optimized (starting from better-suited values results in a better model). The results also show that the applicability of the tuned parameter values across different model resolutions is somewhat limited because of resolution-dependent model biases, and we also found that the parameter covariances provided by the tuning algorithm seem to be uninformative.
Significance Statement
The purpose of this work is to show how to use algorithmic methods to optimize a weather model in a computationally efficient manner. Traditional manual model tuning is an extremely laborious and time-consuming process, so algorithmic methods have strong potential for saving the model developers’ time and accelerating development. This paper shows that algorithmic optimization is possible and that weather forecasts can be improved. However, potential issues related to the use of the optimized parameter values across different model resolutions are discussed as well as other shortcomings related to the tuning process.
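The scaling quoted above, O(20) parameters optimized in O(100) algorithm steps with an O(20)-member ensemble, can be illustrated with a generic ensemble optimization loop. The sketch below uses a simple evolution-strategy update on a toy objective; it is not the specific tuning algorithm applied to OpenIFS, and the objective stands in for a real forecast-skill score.

```python
import numpy as np

def toy_forecast_error(theta):
    """Stand-in objective: distance to hypothetical 'best' parameter values."""
    best = np.linspace(0.1, 2.0, theta.size)
    return float(np.sum((theta - best) ** 2))

rng = np.random.default_rng(0)
dim, members, steps = 20, 20, 100        # O(20) params, members; O(100) steps
mean, sigma = np.ones(dim), 0.5          # starting values matter, as noted above
for _ in range(steps):
    pop = mean + sigma * rng.normal(size=(members, dim))   # perturbed ensemble
    errs = np.array([toy_forecast_error(p) for p in pop])  # one "forecast" each
    elite = pop[np.argsort(errs)[: members // 4]]          # keep the best quarter
    mean = elite.mean(axis=0)
    sigma *= 0.97                                          # shrink search radius
print(round(toy_forecast_error(mean), 3))
```

Each algorithm step costs one ensemble of model runs, which is what makes roughly 100 steps with ~20 members a feasible budget for tuning a real forecast model.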
Abstract
A record-breaking precipitation event occurred in the Henan province of China in July 2021 (217HP). To identify the moisture source of the event, ensemble experiments with 120 members were conducted in this study. Results show that the precipitable water during this extreme event was primarily contributed by the low-level southeasterly (LLSE) water vapor transport. The LLSE was largely enhanced by the pressure gradient force maintained by the western Pacific subtropical high and further amplified by the latent heat release in the rainfall system over Henan. The positive moisture advection by the LLSE and evaporative water occurred below 950 hPa and was redistributed into higher levels by the LLSE jet-enhanced subgrid vertical turbulent transport. As a result, the combination of the enhanced LLSE centered around 950 hPa and the increase in moisture below 850 hPa were the main drivers for the continuous strengthening of LLSE moisture transport, with the former playing the dominant role. It is also found that not only the presence but also the intensity of the LLSE jet were important for reproducing the extreme rainfall. The impact of binary tropical cyclones In-Fa and Cempaka on the low-level moisture transport was also examined. We found that In-Fa (2021) presented an uncertain impact on 217HP, while Cempaka (2021) was found to be unfavorable for 217HP. Unlike the LLSE water vapor transport pathway, Cempaka mainly acted to weaken the southwesterly wind to the southwest of Henan by reducing the pressure gradient, thereby impeding the water vapor transport toward Henan.
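Precipitable water, the quantity the LLSE transport is found to supply, is the pressure-integrated specific humidity of a column. A sketch of that integral for a hypothetical sounding that is moist below 850 hPa, echoing the moisture distribution described above; the humidity values are illustrative, not from the 217HP case.

```python
import numpy as np

G = 9.81        # gravitational acceleration (m s^-2)
RHO_W = 1000.0  # density of liquid water (kg m^-3)

def precipitable_water(q, p):
    """PW = (1/(g*rho_w)) * integral of q dp, returned in millimetres.

    q: specific humidity (kg/kg) on pressure levels p (Pa), surface first,
    integrated by the trapezoidal rule between adjacent levels.
    """
    pw = 0.0
    for k in range(len(p) - 1):
        pw += 0.5 * (q[k] + q[k + 1]) * (p[k] - p[k + 1])
    return pw / (G * RHO_W) * 1000.0   # metres of liquid water -> mm

# Hypothetical moist sounding (pressure in Pa, humidity in kg/kg).
p = np.array([1000e2, 950e2, 850e2, 700e2, 500e2, 300e2])
q = np.array([0.018, 0.016, 0.012, 0.007, 0.003, 0.0005])
print(round(precipitable_water(q, p), 1))
```

Because the integrand is weighted by the pressure depth of each layer, moistening below 850 hPa moves the total far more than an equal humidity change aloft, consistent with the low-level focus of the moisture budget above.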
Abstract
The momentum roughness length (z0) significantly impacts wind predictions in weather and climate models. Nevertheless, the impacts of z0 parameterizations in different wind regimes and various model configurations on hurricane size, intensity, and track simulations have not been thoroughly established. To bridge this knowledge gap, a comprehensive analysis of 310 simulations of 10 real hurricanes using the Weather Research and Forecasting (WRF) Model is conducted in comparison with observations. Our results show that the default z0 parameterizations in WRF perform well for weak (category 1–2) hurricanes; however, they underestimate the intensities of strong (category 3–5) hurricanes. This finding is independent of model resolution or boundary layer scheme. The default values of z0 in WRF agree with the observational estimates from dropsonde data in weak hurricanes, while they are much larger than observations in the strong-hurricane regime. Decreasing z0 toward the values from observational estimates and theoretical hurricane intensity models in high wind regimes (≳45 m s−1) led to significant improvements in the intensity forecasts of strong hurricanes. A momentum budget analysis dynamically explained why the reduction of z0 (decreased surface turbulent stresses) leads to stronger simulated storms.
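Default surface-layer schemes typically let z0 grow with wind speed (Charnock-type relations), whereas the dropsonde estimates cited above level off at high winds; capping z0 lowers the neutral drag coefficient and hence the surface stress. The sketch below contrasts the two behaviors; the Charnock constant and the cap value are assumed for illustration, not taken from WRF or the paper.

```python
import math

KAPPA, G = 0.4, 9.81   # von Karman constant, gravity (m s^-2)

def z0_charnock(ustar, alpha=0.011):
    """Charnock-type roughness: grows quadratically with friction velocity."""
    return alpha * ustar**2 / G

def z0_capped(ustar, cap=2.0e-3):
    """Roughness capped in the high-wind regime (cap value assumed here)."""
    return min(z0_charnock(ustar), cap)

def drag_coefficient(z0, z=10.0):
    """Neutral 10-m drag coefficient from the logarithmic wind profile."""
    return (KAPPA / math.log(z / z0)) ** 2

for ustar in (0.5, 1.5, 2.5):   # roughly weak-storm to major-hurricane values
    print(f"u*={ustar}: Cd default={drag_coefficient(z0_charnock(ustar)):.2e}, "
          f"capped={drag_coefficient(z0_capped(ustar)):.2e}")
```

At the highest friction velocity the capped roughness gives a noticeably smaller drag coefficient, which is the direction of change the momentum budget analysis links to stronger simulated storms.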
Abstract
During the last week of June 2021, the Pacific Northwest region of North America experienced a record-breaking heatwave of historic proportions. All-time high temperature records were shattered, often by several degrees, across many locations, with Canada setting a new national record, the state of Washington setting a new record, and the state of Oregon tying its previous record. Here we diagnose the key meteorological factors that contributed to this heatwave. The event was associated with a highly anomalous midtropospheric ridge, with peak 500-hPa geopotential height anomalies centered over central British Columbia. This ridge developed over several days as part of a large-scale wave train. Back trajectory analysis indicates that synoptic-scale subsidence and associated adiabatic warming played a key role in enhancing the magnitude of the heat to the south of the ridge peak, while diabatic heating was dominant closer to the ridge center. Easterly/offshore flow inhibited marine cooling and contributed additional downslope warming along the western portions of the region. A notable surface thermally induced trough was evident throughout the event over western Oregon and Washington. An eastward shift of the thermal trough, following the eastward migration of the 500-hPa ridge, allowed an inland surge of cooler marine air and dramatic 24-h cooling, especially along the western periphery of the region. Large-scale horizontal warm-air advection played a minimal role. When compared with past highly amplified ridges over the region, this event was characterized by much higher 500-hPa geopotential heights, a stronger thermal trough, and stronger offshore flow.
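The adiabatic warming attributed above to synoptic-scale subsidence follows from conservation of potential temperature during dry descent. A worked sketch with an assumed starting temperature and pressure levels chosen purely for illustration:

```python
R_CP = 287.0 / 1004.0   # kappa = R_d / c_p for dry air

def temp_after_descent(t_start, p_start, p_end):
    """Dry-adiabatic temperature after descent: potential temperature is
    conserved, so T scales with (p_end / p_start) ** kappa."""
    return t_start * (p_end / p_start) ** R_CP

# Parcel subsiding from 500 to 850 hPa; the 260-K starting temperature
# is an assumed midtropospheric value, not a trajectory from the event.
t850 = temp_after_descent(260.0, 500e2, 850e2)
print(round(t850 - 273.15, 1), "deg C at 850 hPa")
```

Descent through this depth warms the parcel by roughly 40 K, illustrating why subsidence beneath an anomalous ridge can supply extreme near-surface heat without horizontal warm-air advection.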