Search Results
You are looking at items 21–30 of 30 for
- Author or Editor: John W. Nielsen-Gammon
- Refine by Access: All Content
Abstract
This study explores the extent to which potential vorticity (PV) generation and superposition were relevant on a variety of scales during the genesis of Tropical Storm Allison. Allison formed close to shore, and the combination of continuous Doppler radar, satellite, aircraft, and surface observations allows for the examination of tropical cyclogenesis in great detail.
Preceding Allison’s genesis, PV superposition on the large scale created an environment where decreased vertical shear and increased instability, surface fluxes, and low-level cyclonic vorticity coexisted. This presented a favorable environment for meso-α-scale PV production by widespread convection and led to the formation of surface-based, meso-β-scale vortices [termed convective burst vortices (CBVs)]. The CBVs seemed to form in association with intense bursts of convection and rotated around each other within the meso-α circulation field. One CBV eventually superposed with a mesoscale convective vortex (MCV), resulting in a more concentrated surface vortex with stronger pressure gradients.
The unstable, vorticity-rich environment was also favorable for the development of even smaller, meso-γ-scale vortices that formed within the cores of deep convective cells. Several meso-γ-scale convective vortices were present in the immediate vicinity when a CBV developed, and the smaller vortices may have contributed to the formation of the CBV. The convection associated with the meso-γ vortices also fed PV into existing CBVs.
Much of the vortex behavior observed in Allison has been documented or simulated in studies of other tropical cyclones. Multiscale vortex formation and interaction may be a common aspect of many tropical cyclogenesis events.
Abstract
Accurate depiction of meteorological conditions, especially within the planetary boundary layer (PBL), is important for air pollution modeling, and PBL parameterization schemes play a critical role in simulating the boundary layer. This study examines the sensitivity of the performance of the Weather Research and Forecast (WRF) model to the use of three different PBL schemes [Mellor–Yamada–Janjic (MYJ), Yonsei University (YSU), and the asymmetric convective model, version 2 (ACM2)]. Comparison of surface and boundary layer observations with 92 sets of daily, 36-h high-resolution WRF simulations with different schemes over Texas in July–September 2005 shows that the simulations with the YSU and ACM2 schemes give much less bias than with the MYJ scheme. Simulations with the MYJ scheme, the only local closure scheme of the three, produced the coldest and moistest biases in the PBL. The differences among the schemes are found to be due predominantly to differences in vertical mixing strength and entrainment of air from above the PBL. A sensitivity experiment with the ACM2 scheme confirms this diagnosis.
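The scheme comparison above rests on bias statistics between simulated and observed surface fields. A minimal sketch of such a mean-bias calculation, with illustrative temperature values that are not from the study:

```python
import numpy as np

def mean_bias(simulated, observed):
    """Mean bias of a simulated field against observations (sim - obs)."""
    sim = np.asarray(simulated, dtype=float)
    obs = np.asarray(observed, dtype=float)
    return float(np.mean(sim - obs))

# Illustrative 2-m temperatures (degC); values are made up for the sketch
obs     = np.array([30.1, 31.4, 32.0, 29.8])
sim_myj = np.array([28.9, 30.2, 30.8, 28.7])  # a scheme with a cold bias
sim_ysu = np.array([30.0, 31.5, 31.9, 29.9])  # a scheme with near-zero bias

bias_myj = mean_bias(sim_myj, obs)  # negative: cold bias
bias_ysu = mean_bias(sim_ysu, obs)  # close to zero
```

A negative bias over the planetary boundary layer, as the MYJ scheme showed here, indicates systematically cold simulations relative to the observations.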
Abstract
The process of tropopause folding is studied in the context of the life cycle of baroclinic waves. Previous studies of upper-level frontogenesis have emphasized the role of the vertical circulation in driving stratospheric air down into the midtroposphere. Here, a potential vorticity–based approach is adopted that focuses on the generation of a folded tropopause. To facilitate comparison of the two approaches, the diagnosis is applied to the upper-level front previously simulated and studied by Rotunno et al. The potential vorticity approach clarifies the primary role played by the horizontal nondivergent wind in producing a fold and explains why folding should be a common aspect of baroclinic development.
Between the trough and upstream ridge, prolonged subsidence within a region of weak system-relative flow generates a tropopause depression oriented at an angle to the large-scale flow. The large-scale vertical shear then locally increases the slope of the tropopause, eventually leading to a tropopause fold. In contrast, tropopause folding in the base of the trough is caused by the nondivergent cyclonic circulation associated with the surface thermal wave. The winds associated with the thermal wave amplify the potential vorticity wave aloft, and these winds, which decrease with height, rapidly generate a tropopause fold within the trough.
Education
What Does It Take to Get into Graduate School? A Survey of Atmospheric Science Programs
Abstract
The AMS's Board on Higher Education undertook a survey of atmospheric science graduate programs in the United States and Canada during the fall and winter of 2007–08. The survey involved admission data for the three previous years and was performed with assistance from AMS headquarters and in cooperation with the University Corporation for Atmospheric Research (UCAR). Usable responses were received from 29 programs, including most major atmospheric science programs.
The responding schools receive between 6 and 140 applications per year, and typical incoming class sizes range from 1 to 24. About 69% of applicants and 76% of enrollees are domestic students. At the majority of schools, all incoming students receive full financial support.
The average graduate program looks for undergraduate grade point averages of at least 3.3 to 3.5, and higher for nonscience majors. Grade point averages in math and science courses, typically 3.5 or better, are particularly important. The typical mid-class GRE score of entering graduate students was a combined verbal and quantitative score of 1,300. Larger schools tend to place particular emphasis on math/science grades and letters of recommendation, while smaller schools typically value a broader range of application characteristics.
Students considering graduate school should make a special effort to cultivate potential letter writers, working on research projects if possible. They should also become informed about the particular requirements and values of the programs to which they are applying by visiting them if possible or by contacting professors with active research programs in the student's area of interest.
Abstract
No Abstract available.
Abstract
A lightning climatology within 50 km of nine outdoor venue locations for the 1996 Summer Olympics has been produced. Spatial and temporal patterns were analyzed for July and August from 1986 through 1993. Unusually active and inactive lightning days were isolated, and thermodynamic variables examined. At the inland sites, no pattern was found in the spatial distribution of cloud-to-ground lightning; that is, the lightning locations were random. At the one coastal site, Savannah, an inland maximum in ground flash density was observed. Although there was great day-to-day variability, there was a diurnal progression of lightning with a broad minimum from 0600 to 1400 UTC and a sharp maximum near 2200 UTC.
Composite synoptic charts were produced for eight selected active days and eight selected inactive days. At the 500-hPa level the composite dewpoint depression in central Georgia was approximately 8°C less on active days than on inactive days. At the 850-hPa level the vector-averaged wind fields on active days revealed weakly anticyclonic southwesterly flow throughout Georgia. On inactive days, the vector-averaged winds exhibited a large anticyclone centered in northern Georgia.
Some correlation was found between cloud-to-ground lightning activity and several of the thermodynamic variables. The most highly correlated was a form of convective available potential energy with a correlation coefficient of 0.70. The Showalter stability index and K index had correlation coefficients of 0.60 and 0.56, respectively.
Logistic regression equations were developed to forecast active and inactive lightning days from thermodynamic variables and persistence. Days of unusually low lightning activity were more accurately identified through logistic regression than days of unusually high lightning activity. To aid in forecasting lightning days, the historical probability of active or inactive lightning days is provided as a function of the logistic model output.
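A logistic regression forecast of the kind described maps thermodynamic predictors and persistence to a probability of an active lightning day. A minimal sketch with illustrative coefficients (not the published equations):

```python
import math

def active_day_probability(cape, persistence, b0=-4.0, b_cape=0.002, b_pers=1.2):
    """Logistic model: P(active lightning day) from CAPE (J kg^-1) and
    persistence (1 if the previous day was active, else 0).
    The coefficients are illustrative placeholders, not the study's fit."""
    z = b0 + b_cape * cape + b_pers * persistence
    return 1.0 / (1.0 + math.exp(-z))

p_quiet  = active_day_probability(cape=500.0, persistence=0)   # low CAPE, quiet yesterday
p_active = active_day_probability(cape=2500.0, persistence=1)  # high CAPE, active yesterday
```

In practice the model output would be converted to a categorical forecast by comparing the probability against historically calibrated thresholds, as the abstract suggests.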
Abstract
A mesoscale model is used to investigate the mesoscale predictability of an extreme precipitation event over central Texas that began on 29 June 2002 and lasted through 7 July 2002. Both the intrinsic and practical aspects of warm-season predictability, especially quantitative precipitation forecasts up to 36 h, were explored through experiments with various grid resolutions, initial and boundary conditions, physics parameterization schemes, and the addition of small-scale, small-amplitude random initial errors. It is found that the high-resolution convective-resolving simulations (with grid spacing down to 3.3 km) do not produce the best simulation or forecast. It was also found that both the realistic initial condition uncertainty and model errors can result in large forecast errors for this warm-season flooding event. Thus, practically, there is room to gain higher forecast accuracy through improving the initial analysis with better data assimilation techniques or enhanced observations, and through improving the forecast model with better-resolved or -parameterized physical processes. However, even if a perfect forecast model is used, small-scale, small-amplitude initial errors, such as those in the form of undetectable random noise, can grow rapidly and subsequently contaminate the short-term deterministic mesoscale forecast within 36 h. This rapid error growth is caused by moist convection. The limited deterministic predictability of such a heavy precipitation event, both practically and intrinsically, illustrates the need for probabilistic forecasts at the mesoscales.
Abstract
Meteorological model errors caused by imperfect parameterizations generally cannot be overcome simply by optimizing initial and boundary conditions. However, advanced data assimilation methods are capable of extracting significant information about parameterization behavior from the observations, and thus can be used to estimate model parameters while they adjust the model state. Such parameters should be identifiable, meaning that they must have a detectable impact on observable aspects of the model behavior, their individual impacts should be a monotonic function of the parameter values, and the various impacts should be clearly distinguishable from each other.
A sensitivity analysis is conducted for the parameters within the Asymmetrical Convective Model, version 2 (ACM2) planetary boundary layer (PBL) scheme in the Weather Research and Forecasting model in order to determine the parameters most suited for estimation. A total of 10 candidate parameters are selected from what is, in general, an infinite number of parameters, most being implicit or hidden. Multiple sets of model simulations are performed to test the sensitivity of the simulations to these 10 particular ACM2 parameters within their plausible physical bounds. The most identifiable parameters are found to govern the vertical profile of local mixing within the unstable PBL, the minimum allowable diffusivity, the definition of the height of the unstable PBL, and the Richardson number criterion used to determine the onset of turbulent mixing in stable stratification. Differences in observability imply that the specific choice of parameters to be estimated should depend upon the characteristics of the observations being assimilated.
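One of the identifiability conditions above, that a parameter's impact be a monotonic function of its value, lends itself to a simple check across a set of perturbed-parameter runs. A hedged sketch, with made-up response values standing in for actual model output:

```python
import numpy as np

def is_monotonic(responses):
    """True if a sequence of model responses changes monotonically
    (in either direction) as the parameter is stepped upward."""
    diffs = np.diff(np.asarray(responses, dtype=float))
    return bool(np.all(diffs >= 0) or np.all(diffs <= 0))

# Illustrative responses of a PBL-depth diagnostic (m) to five stepped
# values of a scheme parameter; numbers are invented for the sketch
pbl_depth_response = [850.0, 910.0, 960.0, 990.0, 1005.0]  # candidate: monotonic
noise_response     = [850.0, 820.0, 900.0, 860.0, 880.0]   # not identifiable

identifiable = is_monotonic(pbl_depth_response)
```

A parameter whose perturbations produce only noise-like, non-monotonic responses in the observable diagnostics would be excluded from estimation under this criterion.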
Abstract
An airborne microwave temperature profiler (MTP) was deployed during the Texas 2000 Air Quality Study (TexAQS-2000) to make measurements of boundary layer thermal structure. An objective technique was developed and tested for estimating the mixed layer (ML) height from the MTP vertical temperature profiles. The technique identifies the ML height as a threshold increase of potential temperature from its minimum value within the boundary layer. To calibrate the technique and evaluate the usefulness of this approach, coincident estimates from radiosondes, radar wind profilers, an aerosol backscatter lidar, and in situ aircraft measurements were compared with each other and with the MTP. Relative biases among all instruments were generally less than 50 m, and the agreement between MTP ML height estimates and other estimates was at least as good as the agreement among the other estimates. The ML height estimates from the MTP and other instruments are utilized to determine the spatial and temporal evolution of ML height in the Houston, Texas, area on 1 September 2000. An elevated temperature inversion was present, so ML growth was inhibited until early afternoon. In the afternoon, large spatial variations in ML height developed across the Houston area. The highest ML heights, well over 2 km, were observed to the north of Houston, while downwind of Galveston Bay and within the late afternoon sea breeze ML heights were much lower. The spatial variations that were found away from the immediate influence of coastal circulations were unexpected, and multiple independent ML height estimates were essential for documenting this feature.
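The objective technique described above, ML height as a threshold increase of potential temperature over its boundary layer minimum, can be sketched directly. This is a hypothetical re-implementation of the idea, not the published code, and the profile values are idealized:

```python
import numpy as np

def mixed_layer_height(heights, theta, threshold=1.0):
    """Estimate mixed-layer (ML) height from a potential temperature profile.

    Finds the minimum potential temperature within the profile, then returns
    the lowest height at which theta exceeds that minimum by `threshold` K.
    """
    heights = np.asarray(heights, dtype=float)
    theta = np.asarray(theta, dtype=float)
    theta_min = theta.min()
    above = np.nonzero(theta >= theta_min + threshold)[0]
    return float(heights[above[0]]) if above.size else float(heights[-1])

# Idealized profile: well-mixed layer to ~1500 m, capped by an inversion
z  = np.array([0, 250, 500, 750, 1000, 1250, 1500, 1750, 2000])   # m
th = np.array([300.2, 300.0, 300.0, 300.1, 300.1, 300.2,
               301.5, 303.0, 305.0])                               # K

zi = mixed_layer_height(z, th)  # 1500.0 m for this profile
```

The threshold value would need to be calibrated against independent estimates (radiosonde, wind profiler, lidar), which is exactly the cross-comparison the study performed.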
Abstract
The record-setting 2011 Texas drought/heat wave is examined to identify physical processes, underlying causes, and predictability. October 2010–September 2011 was Texas’s driest 12-month period on record. While the summer 2011 heat wave magnitude (2.9°C above the 1981–2010 mean) was larger than the previous record, events of similar or larger magnitude appear in preindustrial control runs of climate models. The principal factor contributing to the heat wave magnitude was a severe rainfall deficit during antecedent and concurrent seasons related to anomalous sea surface temperatures (SSTs) that included a La Niña event. Virtually all the precipitation deficits appear to be due to natural variability. About 0.6°C warming relative to the 1981–2010 mean is estimated to be attributable to human-induced climate change, with warming observed mainly in the past decade. Quantitative attribution of the overall human-induced contribution since preindustrial times is complicated by the lack of a detected century-scale temperature trend over Texas. Multiple factors altered the probability of climate extremes over Texas in 2011. Observed SST conditions increased the frequency of severe rainfall deficit events from 9% to 34% relative to 1981–2010, while anthropogenic forcing did not appreciably alter their frequency. Human-induced climate change increased the probability of a new temperature record from 3% during the 1981–2010 reference period to 6% in 2011, while the 2011 SSTs increased the probability from 4% to 23%. Forecasts initialized in May 2011 demonstrate predictive skill in anticipating much of the SST-enhanced risk for an extreme summer drought/heat wave over Texas.
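The probability shifts quoted above can be read as risk ratios, i.e., how many times more likely each extreme became under the altered conditions. A small arithmetic sketch using the abstract's own numbers:

```python
def risk_ratio(p_event, p_baseline):
    """Ratio of event probability under altered conditions to baseline."""
    return p_event / p_baseline

# Probabilities quoted in the abstract
rr_rain_sst = risk_ratio(0.34, 0.09)  # severe rainfall deficit, observed SSTs (~3.8x)
rr_temp_acc = risk_ratio(0.06, 0.03)  # temperature record, anthropogenic forcing (2x)
rr_temp_sst = risk_ratio(0.23, 0.04)  # temperature record, 2011 SSTs (~5.8x)
```

On this reading, the 2011 SST state raised the odds of both extremes far more than anthropogenic forcing did, consistent with the abstract's attribution of the rainfall deficit largely to natural variability.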