Search Results

You are looking at 1–10 of 12 items for Author or Editor: John G. Williams
John G. Williams

Abstract

Calculations and measurements show that the transmissivity parameter t is an increasing function of the optical air mass m. An equation for estimating t for several model atmospheres is presented, and the direct beam flux calculated with the estimates is compared with that obtained using a constant value of t. The equation gives significantly better results for hourly or instantaneous values, and for daily totals on some surfaces.
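
The dependence described here can be sketched with the standard Beer–Bouguer form for the direct beam, in which the flux scales as t raised to the power m. The short Python sketch below compares a constant transmissivity with a hypothetical increasing t(m); the solar constant and the particular form of t(m) are illustrative assumptions, not the equation presented in the paper.

```python
import numpy as np

# Illustrative sketch, not the paper's fitted equation: direct-beam flux in
# the standard Beer-Bouguer form S = S0 * t**m, comparing a constant
# transmissivity with a hypothetical t(m) that increases with air mass m.

S0 = 1361.0  # approximate solar constant, W m^-2

def air_mass(zenith_deg):
    """Simple secant approximation to optical air mass (adequate away from the horizon)."""
    return 1.0 / np.cos(np.radians(zenith_deg))

def t_constant(m, t0=0.70):
    """Constant transmissivity, independent of air mass."""
    return np.full_like(np.asarray(m, dtype=float), t0)

def t_increasing(m, t0=0.70, a=0.02):
    """Hypothetical increasing t(m); a placeholder, not the published estimate."""
    return np.minimum(t0 + a * (m - 1.0), 0.95)

zenith = np.array([0.0, 30.0, 60.0, 75.0])   # solar zenith angles, degrees
m = air_mass(zenith)

# Direct-beam flux on a surface normal to the beam for each formulation.
flux_const = S0 * t_constant(m) ** m
flux_vary = S0 * t_increasing(m) ** m

for z, fc, fv in zip(zenith, flux_const, flux_vary):
    print(f"zenith {z:4.0f} deg: constant t -> {fc:6.1f} W/m^2, increasing t(m) -> {fv:6.1f} W/m^2")
```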

Full access
John G. Williams and Werner H. Terjung

Abstract

Grid-point data on the MSL pressure and 500 mb height fields over western North America are filtered with the eigenvectors of their covariance matrix. The filtering allows virtually all the information in the 134 grid-point values for each day of the 8-year sample to be represented by 15 eigenvector coefficients. The three-dimensional eigenvector patterns are meteorologically coherent, and the filtering process can be used as a screen for bad data or for anomalous circulation patterns.
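
As an illustration of this kind of eigenvector (EOF) filtering, the NumPy sketch below projects a daily data matrix onto the leading eigenvectors of its covariance matrix and reconstructs the fields from the retained coefficients. The dimensions (134 grid points, 15 retained eigenvectors, roughly 8 years of daily fields) follow the abstract; the synthetic data and variable names are placeholders, not the actual pressure and height fields.

```python
import numpy as np

# Minimal EOF (eigenvector) filtering sketch: project daily grid-point fields
# onto the leading eigenvectors of their covariance matrix and reconstruct.
# Shapes follow the abstract (134 grid points, 15 retained coefficients);
# the random data stand in for the MSL pressure / 500 mb height anomalies.

rng = np.random.default_rng(0)
n_days, n_points, n_keep = 2922, 134, 15           # ~8 years of daily fields

X = rng.standard_normal((n_days, n_points))        # placeholder anomaly matrix
X = X - X.mean(axis=0)                              # remove the time mean

cov = np.cov(X, rowvar=False)                       # (134, 134) covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
order = np.argsort(eigvals)[::-1]                   # sort descending
E = eigvecs[:, order[:n_keep]]                      # leading 15 eigenvectors

coeffs = X @ E                                      # 15 coefficients per day
X_filtered = coeffs @ E.T                           # filtered (reconstructed) fields

# Large residuals flag suspect data or anomalous circulation patterns.
residual = np.linalg.norm(X - X_filtered, axis=1)
print("variance retained:", eigvals[order[:n_keep]].sum() / eigvals.sum())
print("largest daily residual:", residual.max())
```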

Full access
Rob A. Hall, John M. Huthnance, and Richard G. Williams

Abstract

Reflection of internal waves from sloping topography is simple to predict for uniform stratification and linear slope gradients. However, depth-varying stratification presents the complication that regions of the slope may be subcritical and other regions supercritical. Here, a numerical model is used to simulate a mode-1, M₂ internal tide approaching a shelf slope with both uniform and depth-varying stratifications. The fractions of incident internal wave energy reflected back offshore and transmitted onto the shelf are diagnosed by calculating the energy flux at the base of the slope (with and without topography) and at the shelf break. For the stratifications/topographies considered in this study, the fraction of energy reflected for a given slope criticality is similar for both uniform and depth-varying stratifications. This suggests the fraction reflected is dependent only on maximum slope criticality and independent of the depth of the pycnocline. The majority of the reflected energy flux is in mode 1, with only minor contributions from higher modes due to topographic scattering. The fraction of energy transmitted is dependent on the depth structure of the stratification and cannot be predicted from maximum slope criticality. If near-surface stratification is weak, transmitted internal waves may not reach the shelf break because of decreased horizontal wavelength and group velocity.
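
For background, slope criticality is conventionally defined as the ratio of the topographic slope to the characteristic slope of the internal wave; the standard form below is given for reference and is not quoted from this abstract:

```latex
\gamma \;=\; \frac{\left|\,\mathrm{d}h/\mathrm{d}x\,\right|}
                  {\sqrt{\dfrac{\omega^{2}-f^{2}}{N^{2}-\omega^{2}}}}\,,
\qquad
\gamma < 1 \ \text{(subcritical)}, \qquad \gamma > 1 \ \text{(supercritical)},
```

where dh/dx is the bottom slope, ω the wave frequency (here the M₂ tidal frequency), f the Coriolis frequency, and N the buoyancy frequency evaluated along the slope. With depth-varying stratification, N changes with depth, so γ varies along the slope, which is the complication the abstract addresses.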

Full access
John C. Marshall, Richard G. Williams, and A. J. George Nurser

Abstract

The annual rate at which mixed-layer fluid is transferred into the permanent thermocline (the annual subduction rate Sann) and the effective subduction period 𝒯eff are inferred from climatological data in the North Atlantic. From its kinematic definition, Sann is obtained by summing the vertical velocity at the base of the winter mixed layer with the lateral induction of fluid through the sloping base of the winter mixed layer. Geostrophic velocity fields, computed from the Levitus climatology assuming a level of no motion at 2.5 km, are used; the vertical velocity at the base of the mixed layer is deduced from observed surface Ekman pumping velocities and linear vorticity balance. A plausible pattern of Sann is obtained, with subduction rates over the subtropical gyre approaching 100 m/yr, twice the maximum rate of Ekman pumping.
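
The kinematic definition referred to above can be written in the form standard in the subduction literature (stated here as background rather than quoted from the paper):

```latex
S_{\mathrm{ann}} \;=\; -\,\overline{w_{H}} \;-\; \overline{\mathbf{u}_{H}}\cdot\nabla H ,
```

where z = −H(x, y) is the base of the winter mixed layer, w_H and u_H are the annual-mean vertical and horizontal velocities evaluated there, and the second term is the lateral induction of fluid through the sloping mixed-layer base. In this framework w_H follows from the surface Ekman pumping and the linear vorticity balance βv = f ∂w/∂z, as described in the abstract.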

The subduction period 𝒯eff is found by viewing subduction as a transformation process converting mixed-layer fluid into stratified thermocline fluid. The effective period is that period of time during the shallowing of the mixed layer in which sufficient buoyancy is delivered to permit irreversible transfer of fluid into the main thermocline at the rate Sann. Typically 𝒯eff is found to be 1 to 2 months over the major part of the subtropical gyre, rising to 4 months in the tropics.

Finally, the heat budget of a column of fluid extending from the surface down to the base of the seasonal thermocline is discussed, following the column over an annual cycle. We are able to relate the buoyancy delivered to the mixed layer during the subduction period to the annual-mean buoyancy forcing through the sea surface plus the warming due to the convergence of Ekman heat fluxes. The relative importance of surface fluxes (heat and freshwater) and Ekman fluxes in supplying buoyancy to support subduction is examined using the climatological observations of Isemer and Hasse, Schmitt et al., and Levitus. The pumping down of fluid from the warm summer Ekman layer into the thermocline makes a crucial contribution and, over the subtropical gyre, is the dominant term in the thermodynamics of subduction.

Full access
Richard G. Williams, John C. Marshall, and Michael A. Spall

Abstract

Stommel argued that the seasonal cycle leads to a bias in the coupling between the surface mixed layer and the main thermocline of the ocean. He suggested that a “demon” operated that effectively only allowed fluid at the end of winter to pass from the mixed layer into the main thermocline. In this study, Stommel's hypothesis is examined using diagnostics from a time-dependent coupled mixed layer–primitive equation model of the North Atlantic (CME). The influence of the seasonal cycle on the properties of the main thermocline is investigated using two methods. In the first, the rate and timing of subduction into the main thermocline are diagnosed using kinematic methods from the 1° resolution CME fields. In the second, tracer diagnostics of the CME and idealized experiments using a “date” tracer identifying the timing of subduction are performed. Over the subtropical gyre, both approaches generally support Stommel's hypothesis that fluid is only transferred from the mixed layer into the main thermocline over a short period, ∼1 month, in late winter/early spring. Date tracer experiments are also conducted using the eddy-resolving ⅓° CME fields. Eddy stirring is found to enhance the rate at which the tracer spreads into unventilated regions, but does not alter the seasonal bias of the Stommel demon mechanism.
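
The “date” tracer idea can be illustrated with a schematic single-column sketch (an idealized caricature, not the CME diagnostics): Lagrangian parcels are pumped slowly downward, and a parcel's tracer is reset to the current day of year whenever it lies within a seasonally varying mixed layer. Parcels that end up below the deepest winter mixed layer carry the date at which they were irreversibly subducted, and those dates cluster in late winter/early spring. All numerical values below are illustrative assumptions.

```python
import numpy as np

# Schematic "date tracer" illustrating the Stommel-demon bias (an idealized
# single-column caricature, not the CME diagnostics). Lagrangian parcels are
# pumped slowly downward; whenever a parcel lies inside the seasonally varying
# mixed layer its tracer is reset to the current day of year, so parcels found
# below the deepest winter mixed layer record when they were subducted.
# All numbers (pumping rate, mixed-layer cycle, depths) are illustrative.

rng = np.random.default_rng(1)
n_parcels = 2000
depth = rng.uniform(0.0, 500.0, n_parcels)   # parcel depths (m, positive downward)
date = np.full(n_parcels, -1.0)              # day of year of last ventilation

w = 100.0 / 365.0                            # downward pumping, ~100 m/yr in m/day

def mixed_layer_depth(doy):
    """Idealized seasonal cycle: ~300 m in late winter, ~50 m in summer."""
    return 175.0 + 125.0 * np.cos(2.0 * np.pi * (doy - 60.0) / 365.0)

for day in range(5 * 365):                   # integrate five annual cycles
    doy = day % 365
    depth += w                               # slow downward pumping
    in_mixed_layer = depth <= mixed_layer_depth(doy)
    date[in_mixed_layer] = doy               # reset the tracer inside the mixed layer

# Parcels now below the deepest winter mixed layer, but ventilated at some point,
# were irreversibly subducted; their recorded dates cluster in late winter/spring.
subducted = (depth > 300.0) & (date >= 0.0)
print("parcels irreversibly subducted:", int(subducted.sum()))
print("median subduction day of year:", float(np.median(date[subducted])))
```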

Full access
Todd P. Lane, Robert D. Sharman, Stanley B. Trier, Robert G. Fovell, and John K. Williams

Anyone who has flown in a commercial aircraft is familiar with turbulence. Unexpected encounters with turbulence pose a safety risk to airline passengers and crew, can occasionally damage aircraft, and indirectly increase the cost of air travel. Deep convective clouds are one of the most important sources of turbulence. Cloud-induced turbulence can occur both within clouds and in the surrounding clear air. Turbulence associated with but outside of clouds is of particular concern because it is more difficult to discern using standard hazard identification technologies (e.g., satellite and radar) and thus is often the source of unexpected turbulence encounters. Although operational guidelines for avoiding near-cloud turbulence exist, they are in many ways inadequate because they were developed before the governing dynamical processes were understood. Recently, there have been significant advances in the understanding of the dynamics of near-cloud turbulence. Using examples, this article demonstrates how these advances have stemmed from improved turbulence observing and reporting systems, the establishment of archives of turbulence encounters, detailed case studies, and high-resolution numerical simulations. Some of the important phenomena that have recently been identified as contributing to near-cloud turbulence include atmospheric wave breaking, unstable upper-level thunderstorm outflows, shearing instabilities, and cirrus cloud bands. The consequences of these phenomena for developing new en route turbulence avoidance guidelines and forecasting methods are discussed, along with outstanding research questions.

Full access
Sid-Ahmed Boukabara, Vladimir Krasnopolsky, Stephen G. Penny, Jebb Q. Stewart, Amy McGovern, David Hall, John E. Ten Hoeve, Jason Hickey, Hung-Lung Allen Huang, John K. Williams, Kayo Ide, Philippe Tissot, Sue Ellen Haupt, Kenneth S. Casey, Nikunj Oza, Alan J. Geer, Eric S. Maddy, and Ross N. Hoffman

Abstract

Promising new opportunities to apply artificial intelligence (AI) to the Earth and environmental sciences are identified, informed by an overview of current efforts in the community. Community input was collected at the first National Oceanic and Atmospheric Administration (NOAA) workshop on “Leveraging AI in the Exploitation of Satellite Earth Observations and Numerical Weather Prediction” held in April 2019. This workshop brought together over 400 scientists, program managers, and leaders from the public, academic, and private sectors in order to enable experts involved in the development and adaptation of AI tools and applications to meet and exchange experiences with NOAA experts. Paths are described to realize the potential of AI to better exploit the massive volumes of environmental data from satellite and in situ sources that are critical for numerical weather prediction (NWP) and other Earth and environmental science applications. The main lessons communicated from community input via active workshop discussions and polling are reported. Finally, recommendations are presented for both scientists and decision-makers to address some of the challenges facing the adoption of AI across the Earth sciences.

Open access
T. C. Johns, C. F. Durman, H. T. Banks, M. J. Roberts, A. J. McLaren, J. K. Ridley, C. A. Senior, K. D. Williams, A. Jones, G. J. Rickard, S. Cusack, W. J. Ingram, M. Crucifix, D. M. H. Sexton, M. M. Joshi, B.-W. Dong, H. Spencer, R. S. R. Hill, J. M. Gregory, A. B. Keen, A. K. Pardaens, J. A. Lowe, A. Bodas-Salcedo, S. Stark, and Y. Searl

Abstract

A new coupled general circulation climate model developed at the Met Office's Hadley Centre is presented, and aspects of its performance in climate simulations run for the Intergovernmental Panel on Climate Change Fourth Assessment Report (IPCC AR4) are documented with reference to previous models. The Hadley Centre Global Environmental Model version 1 (HadGEM1) is built around a new atmospheric dynamical core; uses higher resolution than the previous Hadley Centre model, HadCM3; and contains several improvements in its formulation, including interactive atmospheric aerosols (sulphate, black carbon, biomass burning, and sea salt) plus their direct and indirect effects. The ocean component also has higher resolution and incorporates a sea ice component more advanced than that of HadCM3 in terms of both dynamics and thermodynamics. HadGEM1 thus permits experiments including some interactive processes not feasible with HadCM3. The simulation of present-day mean climate in HadGEM1 is significantly better overall than in HadCM3, although some deficiencies exist in the simulation of tropical climate and El Niño variability. We quantify the overall improvement using a quasi-objective climate index encompassing a range of atmospheric, oceanic, and sea ice variables. The improvement arises partly from higher resolution but also from greater fidelity in modeling dynamical and physical processes, for example, in the representation of clouds and sea ice. HadGEM1 has an effective climate sensitivity to a doubling of CO2 (2.8 K) similar to that of HadCM3 (3.1 K), although there are significant regional differences in their response patterns, especially in the Tropics. HadGEM1 is anticipated to be used as the basis for both higher-resolution and higher-complexity Earth System studies in the near future.

Full access
David C. Leon, Jeffrey R. French, Sonia Lasher-Trapp, Alan M. Blyth, Steven J. Abel, Susan Ballard, Andrew Barrett, Lindsay J. Bennett, Keith Bower, Barbara Brooks, Phil Brown, Cristina Charlton-Perez, Thomas Choularton, Peter Clark, Chris Collier, Jonathan Crosier, Zhiqiang Cui, Seonaid Dey, David Dufton, Chloe Eagle, Michael J. Flynn, Martin Gallagher, Carol Halliwell, Kirsty Hanley, Lee Hawkness-Smith, Yahui Huang, Graeme Kelly, Malcolm Kitchen, Alexei Korolev, Humphrey Lean, Zixia Liu, John Marsham, Daniel Moser, John Nicol, Emily G. Norton, David Plummer, Jeremy Price, Hugo Ricketts, Nigel Roberts, Phil D. Rosenberg, David Simonin, Jonathan W. Taylor, Robert Warren, Paul I. Williams, and Gillian Young

Abstract

The Convective Precipitation Experiment (COPE) was a joint U.K.–U.S. field campaign held during the summer of 2013 in the southwest peninsula of England, designed to study convective clouds that produce heavy rain leading to flash floods. The clouds form along convergence lines that develop regularly as a result of the topography. Major flash floods have occurred in the past, most famously at Boscastle in 2004. It has been suggested that much of the rain was produced by warm rain processes, similar to some flash floods that have occurred in the United States. The overarching goal of COPE is to improve quantitative convective precipitation forecasting by understanding the interactions between cloud microphysics and dynamics and thereby to improve numerical weather prediction (NWP) model skill for forecasts of flash floods. Two research aircraft, the University of Wyoming King Air and the U.K. BAe 146, obtained detailed in situ and remote sensing measurements in, around, and below storms on several days. A new fast-scanning X-band dual-polarization Doppler radar made 360° volume scans over 10 elevation angles approximately every 5 min and was augmented by two Met Office C-band radars and the Chilbolton S-band radar. Detailed aerosol measurements were made on the aircraft and on the ground. This paper i) provides an overview of the COPE field campaign and the resulting dataset, ii) presents examples of heavy convective rainfall in clouds containing ice and also in relatively shallow clouds through the warm rain process alone, and iii) explains how COPE data will be used to improve high-resolution NWP models for operational use.

Full access
M. Susan Lozier, Sheldon Bacon, Amy S. Bower, Stuart A. Cunningham, M. Femke de Jong, Laura de Steur, Brad deYoung, Jürgen Fischer, Stefan F. Gary, Blair J. W. Greenan, Patrick Heimbach, Naomi P. Holliday, Loïc Houpert, Mark E. Inall, William E. Johns, Helen L. Johnson, Johannes Karstensen, Feili Li, Xiaopei Lin, Neill Mackay, David P. Marshall, Herlé Mercier, Paul G. Myers, Robert S. Pickart, Helen R. Pillar, Fiammetta Straneo, Virginie Thierry, Robert A. Weller, Richard G. Williams, Chris Wilson, Jiayan Yang, Jian Zhao, and Jan D. Zika

Abstract

For decades oceanographers have understood the Atlantic meridional overturning circulation (AMOC) to be driven primarily by changes in deep-water formation in the subpolar and subarctic North Atlantic. Indeed, current Intergovernmental Panel on Climate Change (IPCC) projections of an AMOC slowdown in the twenty-first century based on climate models are attributed to the inhibition of deep convection in the North Atlantic. However, observational evidence for this linkage has been elusive: there has been no clear demonstration of AMOC variability in response to changes in deep-water formation. The motivation for understanding this linkage is compelling, since the overturning circulation has been shown to sequester heat and anthropogenic carbon in the deep ocean. Furthermore, AMOC variability is expected to impact this sequestration as well as have consequences for regional and global climates through its effect on the poleward transport of warm water. Motivated by the need for a mechanistic understanding of the AMOC, an international community has assembled an observing system, the Overturning in the Subpolar North Atlantic Program (OSNAP), to provide a continuous record of the transbasin fluxes of heat, mass, and freshwater, and to link that record to convective activity and water mass transformation at high latitudes. OSNAP, in conjunction with the Rapid Climate Change–Meridional Overturning Circulation and Heatflux Array (RAPID–MOCHA) at 26°N and other observational elements, will provide a comprehensive measure of the three-dimensional AMOC and an understanding of what drives its variability. The OSNAP observing system was fully deployed in the summer of 2014, and the first OSNAP data products are expected in the fall of 2017.

Full access