Search Results

Showing 1–10 of 11 items for Author or Editor: John K. Williams
John K. Williams and J. Vivekanandan

Abstract

Dual-wavelength ratio (DWR) techniques offer the prospect of high-resolution mapping of cloud microphysical properties, including retrievals of cloud liquid water content (LWC) from reflectivity measured by millimeter-wavelength radars. Unfortunately, noise and artifacts in the DWR require smoothing to obtain physically realistic values of LWC, with a concomitant loss of resolution. Factors that cause inaccuracy in the retrieved LWC include uncertainty in gas and liquid water attenuation coefficients, Mie scattering due to large water droplets or ice particles, corruption of the radar reflectivities by noise and nonatmospheric returns, and artifacts due to mismatched radar illumination volumes. The error analysis presented here consists of both analytic and heuristic arguments; it is illustrated using data from the Mount Washington Icing Sensors Project (MWISP) and from an idealized simulation. In addition to offering insight into design considerations for a DWR system, some results suggest methods that may mitigate some of these sources of error for existing systems and datasets.
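The core retrieval idea can be sketched in a few lines: the range derivative of the smoothed DWR profile is proportional to the differential liquid attenuation, and hence to LWC. The snippet below is a minimal illustration of that relation, not the authors' algorithm; the attenuation-coefficient difference `delta_kappa`, the boxcar smoother, and the synthetic noisy profile are all assumed for demonstration.

```python
import numpy as np

def retrieve_lwc(dwr_db, dr_km, delta_kappa=7.0, smooth_pts=5):
    """Toy DWR-based LWC retrieval (illustrative only).

    dwr_db      : DWR range profile (dB)
    dr_km       : range-gate spacing (km)
    delta_kappa : assumed difference in liquid attenuation coefficients,
                  dB km^-1 per g m^-3 (hypothetical value, not from the paper)
    smooth_pts  : boxcar width; smoothing is needed because the raw DWR
                  derivative is too noisy, at a cost in range resolution
    """
    kernel = np.ones(smooth_pts) / smooth_pts
    dwr_smooth = np.convolve(dwr_db, kernel, mode="same")
    # Two-way differential liquid attenuation: d(DWR)/dr ~ 2 * delta_kappa * LWC
    return np.gradient(dwr_smooth, dr_km) / (2.0 * delta_kappa)

# Synthetic profile with constant LWC = 0.5 g m^-3 plus measurement noise.
rng = np.random.default_rng(0)
r = np.arange(0.0, 5.0, 0.05)                       # range gates, km
dwr = 2.0 * 7.0 * 0.5 * r + rng.normal(0.0, 0.1, r.size)
lwc = retrieve_lwc(dwr, dr_km=0.05)
```

Shrinking `smooth_pts` quickly makes the gate-by-gate retrieval unphysical even at this modest noise level, which is the resolution-versus-accuracy trade-off the abstract describes.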

Full access
David Ahijevych, James O. Pinto, John K. Williams, and Matthias Steiner

Abstract

A data mining and statistical learning method known as a random forest (RF) is employed to generate 2-h forecasts of the likelihood for initiation of mesoscale convective systems (MCS-I). The RF technique uses an ensemble of decision trees to relate a set of predictors [in this case radar reflectivity, satellite imagery, and numerical weather prediction (NWP) model diagnostics] to a predictand (in this case MCS-I). The RF showed a remarkable ability to detect MCS-I events. Over 99% of the 550 observed MCS-I events were detected to within 50 km. However, this high detection rate came with a tendency to issue false alarms, either because of premature warning of an MCS-I event or because RF forecast likelihoods remained elevated well after an MCS-I event occurred. The skill of the RF forecasts was found to increase with the number of trees and the fraction of positive events used in the training set. The skill of the RF was also highly dependent on the types of predictor fields included in the training set and was notably better when a more recent training period was used. The RF offers advantages over high-resolution NWP because it can be run in a fraction of the time and can account for nonlinearly varying biases in the model data. In addition, as part of the training process, the RF ranks the importance of each predictor, which can be used to assess the utility of new datasets in the prediction of MCS-I.
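The RF mechanics the abstract describes can be sketched in plain Python with depth-1 trees (stumps): bootstrapped training sets, an ensemble of trees, and a forecast likelihood formed from the fraction of trees voting for an event. This is a hypothetical miniature, not the study's model; the two predictors and all values are synthetic stand-ins for fields such as reflectivity and an NWP diagnostic.

```python
import random

# Synthetic training set: two predictors per case, 1 = MCS-I occurred.
X = [[0.9, 0.8], [0.8, 0.7], [0.7, 0.9], [0.1, 0.2], [0.2, 0.1], [0.3, 0.3]]
y = [1, 1, 1, 0, 0, 0]

def fit_stump(X, y):
    """Best single-feature threshold split by misclassification count."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({row[j] for row in X}):
            pred = [1 if row[j] > t else 0 for row in X]
            err = sum(p != yi for p, yi in zip(pred, y))
            if best is None or err < best[0]:
                best = (err, j, t)
    return best[1], best[2]

def fit_forest(X, y, n_trees=25, seed=0):
    """Bag of stumps: bootstrap the training set, fit one stump each."""
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forecast_likelihood(forest, x):
    """Fraction of trees voting 'event' serves as the forecast likelihood."""
    return sum(x[j] > t for j, t in forest) / len(forest)

forest = fit_forest(X, y)
```

A real RF also randomizes the features considered at each split and grows deeper trees; counting how often each feature is selected across the ensemble gives the predictor-importance ranking mentioned in the abstract.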

Full access
John R. Mecikalski, John K. Williams, Christopher P. Jewett, David Ahijevych, Anita LeRoy, and John R. Walker

Abstract

The Geostationary Operational Environmental Satellite (GOES)-R convective initiation (CI) algorithm predicts CI in real time over the next 0–60 min. While GOES-R CI has been very successful in tracking nascent clouds and obtaining cloud-top growth and height characteristics relevant to CI in an object-tracking framework, its performance has been hindered by elevated false-alarm rates, and it has not optimally combined satellite observations with other valuable data sources. Presented here are two statistical learning approaches that incorporate numerical weather prediction (NWP) input within the established GOES-R CI framework to produce probabilistic forecasts: logistic regression (LR) and an artificial-intelligence approach known as random forest (RF). Both of these techniques are used to build models that are based on an extensive database of CI events and nonevents and are evaluated via cross validation and on independent case studies. With the proper choice of probability thresholds, both the LR and RF techniques incorporating NWP data produce substantially fewer false alarms than when only GOES data are used. The NWP information identifies environmental conditions (as favorable or unfavorable) for the development of convective storms and improves the skill of the CI nowcasts that operate on GOES-based cloud objects, as compared with when the satellite IR fields are used alone. The LR procedure performs slightly better overall when 14 skill measures are used to quantify the results and notably better on independent case study days.
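The threshold choice the abstract emphasizes can be made concrete with standard contingency-table scores: each probability threshold converts the LR or RF output into yes/no forecasts, trading detection against false alarms. This is generic verification code, not the study's 14-measure evaluation.

```python
def contingency_skill(probs, obs, threshold):
    """POD, FAR, and CSI for yes/no forecasts issued at a probability threshold.

    probs : forecast probabilities in [0, 1]
    obs   : matching observed outcomes (truthy = event occurred)
    """
    hits = sum(1 for p, o in zip(probs, obs) if p >= threshold and o)
    misses = sum(1 for p, o in zip(probs, obs) if p < threshold and o)
    fas = sum(1 for p, o in zip(probs, obs) if p >= threshold and not o)
    pod = hits / (hits + misses) if hits + misses else 0.0
    far = fas / (hits + fas) if hits + fas else 0.0
    csi = hits / (hits + misses + fas) if hits + misses + fas else 0.0
    return pod, far, csi
```

Scanning thresholds with scores like these is how a probabilistic CI nowcast is tuned toward fewer false alarms: raising the threshold lowers FAR at the cost of POD.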

Full access
Todd P. Lane, Robert D. Sharman, Stanley B. Trier, Robert G. Fovell, and John K. Williams

Anyone who has flown in a commercial aircraft is familiar with turbulence. Unexpected encounters with turbulence pose a safety risk to airline passengers and crew, can occasionally damage aircraft, and indirectly increase the cost of air travel. Deep convective clouds are one of the most important sources of turbulence. Cloud-induced turbulence can occur both within clouds and in the surrounding clear air. Turbulence associated with but outside of clouds is of particular concern because it is more difficult to discern using standard hazard identification technologies (e.g., satellite and radar) and thus is often the source of unexpected turbulence encounters. Although operational guidelines for avoiding near-cloud turbulence exist, they are in many ways inadequate because they were developed before the governing dynamical processes were understood. Recently, there have been significant advances in the understanding of the dynamics of near-cloud turbulence. Using examples, this article demonstrates how these advances have stemmed from improved turbulence observing and reporting systems, the establishment of archives of turbulence encounters, detailed case studies, and high-resolution numerical simulations. Some of the important phenomena that have recently been identified as contributing to near-cloud turbulence include atmospheric wave breaking, unstable upper-level thunderstorm outflows, shearing instabilities, and cirrus cloud bands. The consequences of these phenomena for developing new en route turbulence avoidance guidelines and forecasting methods are discussed, along with outstanding research questions.

Full access
David John Gagne II, Amy McGovern, Sue Ellen Haupt, Ryan A. Sobash, John K. Williams, and Ming Xue

Abstract

Forecasting severe hail accurately requires predicting how well atmospheric conditions support the development of thunderstorms, the growth of large hail, and the minimal loss of hail mass to melting before reaching the surface. Existing hail forecasting techniques incorporate information about these processes from proximity soundings and numerical weather prediction models, but they make many simplifying assumptions, are sensitive to differences in numerical model configuration, and are often not calibrated to observations. In this paper a storm-based probabilistic machine learning hail forecasting method is developed to overcome the deficiencies of existing methods. An object identification and tracking algorithm locates potential hailstorms in convection-allowing model output and gridded radar data. Forecast storms are matched with observed storms to determine hail occurrence and the parameters of the radar-estimated hail size distribution. The database of forecast storms contains information about storm properties and the conditions of the prestorm environment. Machine learning models are used to synthesize that information to predict the probability of a storm producing hail and the radar-estimated hail size distribution parameters for each forecast storm. Forecasts from the machine learning models are produced using two convection-allowing ensemble systems and the results are compared to other hail forecasting methods. The machine learning forecasts have a higher critical success index (CSI) at most probability thresholds and greater reliability for predicting both severe and significant hail.
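The forecast-to-observation matching step can be illustrated with a greedy nearest-centroid pairing of storm objects. This is a generic sketch, not the paper's object identification and tracking algorithm; the distance cutoff and the centroid coordinates in any demo are assumptions for illustration.

```python
import math

def match_storms(forecast, observed, max_dist_km=50.0):
    """Greedy matching of forecast storm objects to observed storms.

    forecast, observed : lists of (x_km, y_km) object centroids
    Returns (forecast_index, observed_index) pairs; each observed storm
    is matched to at most one forecast storm within max_dist_km.
    """
    matches, used = [], set()
    for i, (fx, fy) in enumerate(forecast):
        best = None
        for j, (ox, oy) in enumerate(observed):
            if j in used:
                continue
            d = math.hypot(fx - ox, fy - oy)
            if d <= max_dist_km and (best is None or d < best[0]):
                best = (d, j)
        if best is not None:
            used.add(best[1])
            matches.append((i, best[1]))
    return matches
```

Matched pairs supply the hail-occurrence labels and size-distribution targets for training; unmatched forecast storms become the negative (or false alarm) cases.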

Full access
Amy McGovern, Kimberly L. Elmore, David John Gagne II, Sue Ellen Haupt, Christopher D. Karstens, Ryan Lagerquist, Travis Smith, and John K. Williams

Abstract

High-impact weather events, such as severe thunderstorms, tornadoes, and hurricanes, cause significant disruptions to infrastructure, property loss, and even fatalities. High-impact weather can also benefit society, for example through the cost savings that renewable energy provides. Prediction of these events has improved substantially with greater observational capabilities, increased computing power, and better model physics, but there is still significant room for improvement. Artificial intelligence (AI) and data science technologies, specifically machine learning and data mining, bridge the gap between numerical model prediction and real-time guidance by improving accuracy. AI techniques also extract otherwise unavailable information from forecast models by fusing model output with observations to provide additional decision support for forecasters and users. In this work, we demonstrate that applying AI techniques along with a physical understanding of the environment can significantly improve the prediction skill for multiple types of high-impact weather. The AI approach is also a contribution to the growing field of computational sustainability. The authors specifically discuss the prediction of storm duration, severe wind, severe hail, precipitation classification, forecasting for renewable energy, and aviation turbulence. They also discuss how AI techniques can process “big data,” provide insights into high-impact weather phenomena, and improve our understanding of high-impact weather.

Open access
Sean P. Burns, Noah P. Molotch, Mark W. Williams, John F. Knowles, Brian Seok, Russell K. Monson, Andrew A. Turnipseed, and Peter D. Blanken

Abstract

Snowpack temperatures from a subalpine forest below Niwot Ridge, Colorado, are examined with respect to atmospheric conditions and the 30-min above-canopy and subcanopy eddy covariance fluxes of sensible (Q_h) and latent (Q_e) heat. In the lower snowpack, daily snow temperature changes greater than 1°C day^-1 occurred about one to two times in late winter and early spring, which resulted in transitions to and from an isothermal snowpack. Though air temperature was a primary control on snowpack temperature, rapid snowpack warm-up events were sometimes preceded by strong downslope winds that kept the nighttime air (and canopy) temperature above freezing, thus increasing sensible heat and longwave radiative transfer from the canopy to the snowpack. There was an indication that water vapor condensation on the snow surface intensified the snowpack warm-up.

In late winter, subcanopy Q_h was typically between −10 and 10 W m^-2 and rarely had a magnitude larger than 20 W m^-2. The direction of subcanopy Q_h was closely related to the canopy temperature and only weakly dependent on the time of day. The daytime subcanopy Q_h monthly frequency distribution was near normal, whereas the nighttime distribution was more peaked near zero with a large positive skewness. In contrast, above-canopy Q_h was larger in magnitude (100–400 W m^-2) and primarily warmed the forest surface at night and cooled it during the day. Around midday, decoupling of subcanopy and above-canopy air led to an apparent cooling of the snow surface by sensible heat. Sources of uncertainty in the subcanopy eddy covariance flux measurements are suggested. Implications of the observed snowpack temperature changes for future climates are discussed.
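The eddy covariance fluxes discussed above are covariances of vertical wind and temperature fluctuations. A minimal sketch of the standard calculation, assuming a constant air density and heat capacity (values below are typical, not taken from the study):

```python
import statistics

def sensible_heat_flux(w, T, rho=1.2, cp=1005.0):
    """Eddy-covariance sensible heat flux Q_h = rho * cp * mean(w' T').

    w   : vertical wind samples (m s^-1)
    T   : air temperature samples (K or degC; only fluctuations matter)
    rho : assumed air density (kg m^-3)
    cp  : specific heat of air at constant pressure (J kg^-1 K^-1)
    """
    wbar = statistics.fmean(w)
    Tbar = statistics.fmean(T)
    # Covariance of the fluctuations w' = w - wbar and T' = T - Tbar.
    cov = statistics.fmean([(wi - wbar) * (Ti - Tbar) for wi, Ti in zip(w, T)])
    return rho * cp * cov
```

Positive Q_h (upward motion correlated with warm air) warms the atmosphere; in practice the averaging interval, coordinate rotation, and density corrections all matter, which is part of the measurement uncertainty the abstract notes.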

Full access
James Edson, Timothy Crawford, Jerry Crescenti, Tom Farrar, Nelson Frew, Greg Gerbi, Costas Helmis, Tihomir Hristov, Djamal Khelif, Andrew Jessup, Haf Jonsson, Ming Li, Larry Mahrt, Wade McGillis, Albert Plueddemann, Lian Shen, Eric Skyllingstad, Tim Stanton, Peter Sullivan, Jielun Sun, John Trowbridge, Dean Vickers, Shouping Wang, Qing Wang, Robert Weller, John Wilkin, Albert J. Williams III, D. K. P. Yue, and Chris Zappa

The Office of Naval Research's Coupled Boundary Layers and Air–Sea Transfer (CBLAST) program is being conducted to investigate the processes that couple the marine boundary layers and govern the exchange of heat, mass, and momentum across the air–sea interface. CBLAST-LOW was designed to investigate these processes at the low-wind extreme where the processes are often driven or strongly modulated by buoyant forcing. The focus was on conditions ranging from negligible wind stress, where buoyant forcing dominates, up to wind speeds where wave breaking and Langmuir circulations play a significant role in the exchange processes. The field program provided observations from a suite of platforms deployed in the coastal ocean south of Martha's Vineyard. Highlights from the measurement campaigns include direct measurement of the momentum and heat fluxes on both sides of the air–sea interface using a specially constructed Air–Sea Interaction Tower (ASIT), and quantification of regional oceanic variability over scales of O(1–10^4 m) using a mesoscale mooring array, aircraft-borne remote sensors, drifters, and ship surveys. To our knowledge, the former represents the first successful attempt to directly and simultaneously measure the heat and momentum exchange on both sides of the air–sea interface. The latter provided a 3D picture of the oceanic boundary layer during the month-long main experiment. These observations have been combined with numerical models and direct numerical and large-eddy simulations to investigate the processes that couple the atmosphere and ocean under these conditions. For example, the oceanic measurements have been used in the Regional Ocean Modeling System (ROMS) to investigate the 3D evolution of regional ocean thermal stratification.
The ultimate goal of these investigations is to incorporate improved parameterizations of these processes in coupled models such as the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS) to improve marine forecasts of wind, waves, and currents.

Full access
Sid-Ahmed Boukabara, Vladimir Krasnopolsky, Stephen G. Penny, Jebb Q. Stewart, Amy McGovern, David Hall, John E. Ten Hoeve, Jason Hickey, Hung-Lung Allen Huang, John K. Williams, Kayo Ide, Philippe Tissot, Sue Ellen Haupt, Kenneth S. Casey, Nikunj Oza, Alan J. Geer, Eric S. Maddy, and Ross N. Hoffman

Abstract

Promising new opportunities to apply artificial intelligence (AI) to the Earth and environmental sciences are identified, informed by an overview of current efforts in the community. Community input was collected at the first National Oceanic and Atmospheric Administration (NOAA) workshop on “Leveraging AI in the Exploitation of Satellite Earth Observations and Numerical Weather Prediction” held in April 2019. This workshop brought together over 400 scientists, program managers, and leaders from the public, academic, and private sectors in order to enable experts involved in the development and adaptation of AI tools and applications to meet and exchange experiences with NOAA experts. Paths are described to actualize the potential of AI to better exploit the massive volumes of environmental data from satellite and in situ sources that are critical for numerical weather prediction (NWP) and other Earth and environmental science applications. The main lessons communicated from community input via active workshop discussions and polling are reported. Finally, recommendations are presented for both scientists and decision-makers to address some of the challenges facing the adoption of AI across all Earth science.

Open access
T. C. Johns, C. F. Durman, H. T. Banks, M. J. Roberts, A. J. McLaren, J. K. Ridley, C. A. Senior, K. D. Williams, A. Jones, G. J. Rickard, S. Cusack, W. J. Ingram, M. Crucifix, D. M. H. Sexton, M. M. Joshi, B.-W. Dong, H. Spencer, R. S. R. Hill, J. M. Gregory, A. B. Keen, A. K. Pardaens, J. A. Lowe, A. Bodas-Salcedo, S. Stark, and Y. Searl

Abstract

A new coupled general circulation climate model developed at the Met Office's Hadley Centre is presented, and aspects of its performance in climate simulations run for the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) are documented with reference to previous models. The Hadley Centre Global Environmental Model version 1 (HadGEM1) is built around a new atmospheric dynamical core; uses higher resolution than the previous Hadley Centre model, HadCM3; and contains several improvements in its formulation including interactive atmospheric aerosols (sulphate, black carbon, biomass burning, and sea salt) plus their direct and indirect effects. The ocean component also has higher resolution and incorporates a sea ice component more advanced than HadCM3 in terms of both dynamics and thermodynamics. HadGEM1 thus permits experiments including some interactive processes not feasible with HadCM3. The simulation of present-day mean climate in HadGEM1 is significantly better overall in comparison to HadCM3, although some deficiencies exist in the simulation of tropical climate and El Niño variability. We quantify the overall improvement using a quasi-objective climate index encompassing a range of atmospheric, oceanic, and sea ice variables. It arises partly from higher resolution but also from greater fidelity in modeling dynamical and physical processes, for example, in the representation of clouds and sea ice. HadGEM1 has a similar effective climate sensitivity (2.8 K) to a CO2 doubling as HadCM3 (3.1 K), although there are significant regional differences in their response patterns, especially in the Tropics. HadGEM1 is anticipated to be used as the basis for both higher-resolution and higher-complexity Earth System studies in the near future.

Full access