The field of atmospheric science has been enhanced by its long-standing collaboration with users and organizations that have specific needs. This chapter and the two subsequent ones describe how applications have worked to advance the science at the same time that the science has served the needs of society. This chapter briefly reviews the synergy between the applications and advancing the science. It specifically describes progress in weather modification, aviation weather, and applications for security. Each of these applications has resulted in enhanced understanding of the physics and dynamics of the atmosphere, new and improved observing equipment, better models, and a push for greater computing power.
This American Meteorological Society (AMS) 100th-anniversary monograph reviews much of the progress in many disciplines of atmospheric science. Basic research feeds applications. But the applications themselves also demand additional research balanced on the cutting edge of the application and the science on which it is based. The purpose of these chapters on progress in applied meteorology is to report on how that process has progressed as the need has arisen to solve very specific problems and to serve particular sectors. These problems range from getting precipitation in the right places at the right time, to meeting the needs of very specific sectors (including energy, security, surface transportation, wildland fire management, agriculture, and more), to the development of new areas of focus within meteorology (such as space weather and fire weather) and the use of new tools (such as artificial intelligence) to approach our problems.
Walter Orr Roberts, the first director of the National Center for Atmospheric Research (NCAR), stated, “I have a very strong feeling that science exists to serve human betterment and improve human welfare” (NCAR 2018). To that end, atmospheric scientists have worked closely with the users who have the need. Both sides have learned to listen to each other and to devise the best ways to create user decision-support tools. In many cases, just having a standard weather forecast is insufficient for the need. It must instead be tailored to the right format and translated into the appropriate terminology to be usable. These partnerships with end users have taught us much about meteorology and about transdisciplinary work.
In this chapter we begin with some of the most basic and obviously useful applications of meteorology. Humankind has always wanted to control the weather. That is not easy, however, particularly when we do not understand all of the underlying physical processes. Section 2 reviews how weather modification has progressed hand in hand with scientific understanding. The following two sections deal with two applications that are implicit parts of twentieth- and twenty-first-century progress and that could not function without meteorological knowledge—aviation (section 3) and national security (section 4). To limit the scope, both of these sections focus on advances in the United States, which did lead the world in these arenas for some time; more recently, however, parallel accomplishments have been transpiring in the rest of the world as well. Section 5 summarizes these three applications and how they have pushed the science forward.
This chapter should whet the reader’s appetite for more to follow as the subsequent chapters deal with meteorological solutions to problems that have been created by the burgeoning human population. The topics discussed therein include applications in urban meteorology, air pollution management, surface transportation, and energy. This series of application topics concludes with additional important applications, including agriculture and food security, space weather, applications of artificial intelligence, and the use of meteorology in managing wildland fires.
2. History and future of weather modification
a. Introduction to weather modification
In the broadest sense, weather modification refers to alteration of the weather or climate of a region as a result of human activities, intentional or unintentional. This section of the monograph reviews only the history of deliberate attempts, based on underlying scientific hypotheses, to alter natural processes occurring in storms to produce additional precipitation or to reduce weather hazards. Inadvertent weather modification that, for example, might manifest as a reduction of air quality associated with human production of aerosol or as a change in climate due to accumulation of greenhouse gases associated with the burning of fossil fuels, is covered in other articles within this monograph.
The scientific era of weather modification began in 1946 when Vincent Schaefer of the General Electric Company introduced dry ice into a cloud chamber containing a supercooled liquid cloud, immediately causing the cloud to glaciate (Schaefer 1946). Within a month, his colleague Bernard Vonnegut discovered that silver iodide (AgI), a substance with a crystallographic structure closely resembling ice, also effectively glaciated supercooled clouds (Vonnegut 1947). These discoveries launched a scientific revolution in field experimentation, cloud physics, radar meteorology, and storm dynamics that continues to this day. Yet despite the enormous progress in these fields over the last seven decades, the effectiveness of cloud seeding remains controversial within the scientific community. Major reviews of the scientific advances and evaluations of the state of the science over this historical period can be found in reports for the National Academy of Sciences (National Research Council 1964, 1966, 1973, 2003), the National Science Foundation (Special Commission on Weather Modification 1966), and the U.S. Congress (U.S. Congress 1978), in books (McDonald 1958; Hess 1974; Cotton 2009), and in present and past statements by AMS (AMS 2010) and the World Meteorological Organization (WMO; WMO 2010). Somewhat paradoxically, despite the scientific uncertainty, operational cloud seeding programs abound worldwide. A WMO review estimated that 56 countries had active weather modification operations in 2016 (Bruintjes 2016). In that same year in the United States, wintertime cloud seeding operations were carried out across the western mountains to increase snowpack reservoirs, hail-suppression operations occurred in North Dakota, and convective storms were seeded in Texas in an attempt to increase rainfall.
Demand for, and advancements in, cloud seeding technology continue to be driven by water shortages in arid, populated regions of the world, and the need to mitigate damage from hail.
The scientific uncertainty associated with cloud seeding has roots in the evolution of technology as much as it does in science. Following the discoveries of Schaefer and Vonnegut, enormous optimism for the potential of cloud seeding led immediately to a number of projects in the United States and other countries in the 1940s and 1950s (e.g., Kraus and Squires 1947; Leopold and Halstead 1948; Squires and Smith 1949; Smith 1949), which later evolved toward fully randomized scientific experiments by the 1960s (e.g., Mielke et al. 1970, 1971; Gagin and Neumann 1974). However, the technologies required to conduct physical evaluations of natural cloud structure and the effects of seeding during these projects, such as research aircraft and radar systems, and computational capabilities to conduct complementary modeling experiments, had yet to be developed. Scientific attempts to evaluate cloud seeding relied primarily on statistical analyses using target and control approaches, most of which were later shown to be flawed (e.g., Rangno 1979; Rangno and Hobbs 1980a,b, 1993, 1995). It was not until the mid-1970s that instruments such as optical array probes were invented and deployed on research aircraft, about the same time that radars were first utilized in cloud seeding experiments. Doppler and polarization radar technologies were only just being explored. Modeling capabilities with the required resolution and microphysical sophistication to evaluate cloud seeding effects were not available until decades later.
Initial studies with these new technologies raised questions about the validity of the underlying hypotheses at about the same time that the older statistical studies were undergoing scrutiny. The negative overtones of the studies published in the late 1970s and early 1980s, summarized succinctly in an article in Science in 1982 (Kerr 1982), led U.S. government agencies to eliminate all funding for cloud seeding research by the early 1990s. In retrospect, many of the physical studies of that era could also be scrutinized based on advances in our understanding of aircraft probes (e.g., Jackson and McFarquhar 2014), and on more recent measurements from remote sensing technologies such as cloud radars, cloud lidars, and radiometers that were introduced in later decades (e.g., Geerts et al. 2015). Nevertheless, federal funding for cloud seeding research remained unavailable for nearly a quarter century, during which time significant advances were made in observing technologies and modeling capabilities driven in part, ironically, by inadvertent weather modification in the form of anthropogenic climate change.
Technological advances and their potential application to evaluating cloud seeding were recognized in a 2003 report by the National Research Council (National Research Council 2003, p. vii). That report stated that
despite significant advances in computational capabilities to deal with complex processes in the atmosphere and remarkable advances in observing technology, little of this collective power has been applied in any coherent way to weather modification. The potential for progress in weather modification as seen by this Committee is dependent upon an improved fundamental understanding of crucial cloud, precipitation, and larger-scale atmospheric processes. The Committee believes that such progress is now within reach should the above advances be applied in a sustained manner to answer fundamental outstanding questions.
Following this report, in 2012 the National Science Foundation resumed funding for evaluation of cloud seeding, focused on studies of orographic cloud systems over the mountains of Wyoming and Idaho. Although U.S. government interest in weather modification research lapsed during the two decades between 1991 and 2012, international research continued. For example, results from research on rainfall enhancement from convective storms were reported for South Africa and Thailand (Silverman 2001a, 2003), and studies of orographic clouds were reported for the Snowy Mountains of Australia (Manton and Warren 2011; Manton et al. 2011).
b. Scientific hypotheses and progress
Two methods, glaciogenic seeding (dispersing silver iodide aerosol from ground generators or aircraft, or dry ice from aircraft, into a cloud containing supercooled liquid water) and hygroscopic seeding (dispersing salt aerosol by aircraft into the base of a convective updraft), have been employed depending on the physical structure and underlying dynamics of the targeted clouds. Glaciogenic seeding has been used in winter orographic cloud systems in attempts to increase snowfall over target river basins, in cumulus congestus and cumulonimbus clouds to stimulate or increase rainfall, in supercooled fog to increase visibility near airports, and in severe thunderstorms to reduce hail size. Hygroscopic seeding has been used in convective clouds with warm cloud bases (primarily in the tropics) in attempts to stimulate or increase rainfall and to increase visibility in fog near airports. Several hypotheses concerning the chain of events following seeding with these agents have been employed that depend on the type of cloud treated with a seeding agent.
Evaluation of hypotheses concerning the effects of cloud seeding has followed two approaches: statistical assessment and physical evaluation. Statistical assessments to evaluate possible changes in precipitation (either measured with gauges or estimated from radar data) during randomized seeding experiments follow procedures developed specifically for weather modification experiments (e.g., Tukey et al. 1978; Crow et al. 1979; Braham 1979; WMO 1980; Gabriel 1981, 2000; see also Silverman 2001a, 2003). Physical evaluations involve measurements of natural cloud structure and the chain of microphysical events in clouds that follow the introduction of seeding material. A brief summary of progress toward verification of these hypotheses follows for different categories of cloud systems.
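The target/control statistical approach can be illustrated with a minimal permutation test on storm-total precipitation. The data values and function below are hypothetical illustrations only; they stand in for the far more careful designs developed specifically for weather modification experiments in the works cited above.

```python
import random

def permutation_test(seeded, unseeded, n_perm=10000, seed=0):
    """One-sided permutation test for a seeding effect: how often does a
    random relabeling of storms produce a mean difference at least as
    large as the observed one? Illustrative sketch only."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    observed = sum(seeded) / len(seeded) - sum(unseeded) / len(unseeded)
    pooled = list(seeded) + list(unseeded)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        s, u = pooled[:len(seeded)], pooled[len(seeded):]
        if sum(s) / len(s) - sum(u) / len(u) >= observed:
            count += 1
    return count / n_perm

# Hypothetical storm-total precipitation (mm) from a randomized experiment
seeded = [12.1, 9.8, 15.3, 11.0, 13.7, 10.4]
unseeded = [10.2, 8.9, 12.5, 9.1, 11.8, 9.6]
print(permutation_test(seeded, unseeded))
```

The small sample sizes typical of multiyear seeding experiments are one reason such tests struggle to reach significance: natural storm-to-storm variability is large compared with the hypothesized seeding signal.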
1) Increasing snowfall from orographic clouds
Seeding of orographic clouds has generally been viewed as the most promising technology for increasing water supplies because mountain snowpack represents a natural reservoir that is recharged in winter and supplies water to drainage basins in summer. A well-developed research effort has been carried out over many decades within orographic clouds. The term “orographic clouds” refers here to wintertime cloud systems over mountain ranges, regardless of whether the clouds are isolated or part of frontal systems associated with the passage of extratropical cyclones across a mountain range. The fundamental hypothesis underlying cloud seeding as a method to enhance precipitation from wintertime orographic cloud systems is as follows:
Orographic seeding hypothesis: A cloud’s natural precipitation efficiency can be enhanced by converting supercooled water to ice upstream and over a mountain range in such a manner that newly created ice particles, growing by diffusion, riming, and/or aggregation, can fall as additional snow on a specified target area.
This hypothesis relies on the physical principle that the equilibrium vapor pressure of water vapor with respect to ice is less than that with respect to liquid water at the same subfreezing temperature, such that a water-saturated cloud (relative humidity with respect to water RHw = 100%) will be supersaturated with respect to ice at a rate of about 1% per degree of supercooling (Fig. 22-1; Pruppacher and Klett 2010). As a result, ice particles in clouds containing supercooled water grow rapidly to precipitation-sized particles, while supercooled cloud droplets, which are small and nonprecipitating, evaporate to provide the moisture for the growth of ice. This process was first proposed by Alfred Wegener (Wegener 1911), and later explained theoretically and demonstrated in experiments by Tor Bergeron (Bergeron 1935) and Walter Findeisen (Findeisen 1938), well before the discoveries of Schaefer and Vonnegut regarding seeding. Past reviews with a partial or total focus on orographic weather modification research include those of Elliott (1986), Rangno (1986), Reynolds (1988), Orville (1996), Bruintjes (1999), Long (2001), Garstang et al. (2005), Huggins (2008), Tessendorf et al. (2015), and Reynolds (2015).
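The vapor-pressure principle above can be illustrated numerically. The Magnus-type saturation vapor pressure fits below are a common approximation assumed for this sketch (coefficients following Alduchov and Eskridge), not values taken from the text.

```python
import math

def e_sat_water(t_c):
    """Saturation vapor pressure over liquid water (hPa), Magnus-type fit."""
    return 6.1094 * math.exp(17.625 * t_c / (t_c + 243.04))

def e_sat_ice(t_c):
    """Saturation vapor pressure over ice (hPa), Magnus-type fit."""
    return 6.1121 * math.exp(22.587 * t_c / (t_c + 273.86))

def ice_supersaturation_at_water_saturation(t_c):
    """Supersaturation with respect to ice (%) inside a cloud held at
    water saturation (RHw = 100%) at subfreezing temperature t_c (deg C)."""
    return 100.0 * (e_sat_water(t_c) / e_sat_ice(t_c) - 1.0)

# Ice supersaturation grows by roughly 1% per degree of supercooling:
for t in (-5, -10, -15, -20):
    print(f"{t:>4} degC: {ice_supersaturation_at_water_saturation(t):5.1f}% over ice")
```

The output climbs at roughly the 1% per degree rate quoted above, which is why ice particles introduced into a supercooled cloud grow rapidly at the expense of the droplets.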
Physical evaluations of the orographic seeding hypothesis have followed four basic thrusts: 1) determining when and where supercooled liquid water (SLW) was present in clouds, 2) documenting natural precipitation processes and determining which clouds were suitable for treatment with silver iodide, 3) determining conditions under which plumes of ground-released AgI reached target areas, and 4) documenting the microphysical chain of events following seeding to determine whether it was consistent with the hypothesis and if it impacts the water supply. Studies of supercooled water over many mountain ranges have produced consistent results, showing that SLW is most often found in clouds: 1) along and over steep mountain slopes; 2) in embedded convection, when it exists; 3) in more laminar orographic clouds where cloud-top temperatures are greater than −15°C; 4) at cloud top, particularly for cloud tops warmer than approximately −25°C; 5) in turbulent eddies induced by local mountainous terrain; 6) in updrafts associated with mountain-induced gravity waves; and 7) in orographic clouds with bases below melting level and terrain-forced ascent of cloud water through the melting level (Hobbs 1975a; Reinking 1979; Hill and Woffinden 1980; Politovich and Vali 1983; Heggli et al. 1983; Rauber et al. 1986; Rauber and Grant 1986, 1987; Sassen et al. 1986, 1990; Marwitz 1987; Heggli and Rauber 1988; Lee 1988; Rauber and Tokay 1991; Rauber 1992; Demoz et al. 1993; Bruintjes et al. 1994; Long and Carter 1996; Reinking et al. 2000; Kusunoki et al. 2004, 2005; Ikeda et al. 2007; Geerts et al. 2011; Morrison et al. 2013).
Documenting natural precipitation processes to determine which clouds, if any, are suitable for treatment with AgI to enhance precipitation also involved major campaigns (e.g., Hobbs 1975a; Cooper and Saunders 1980; Cooper and Vali 1981; Marwitz 1987; Rauber 1987, 1992; Uttal et al. 1988; Sassen et al. 1990; Long and Carter 1996). Together, these studies showed that the microphysics of mountain cloud systems evolve in close relationship to their dynamical structure, which in turn is associated with the approach and passage of upper-tropospheric fronts and jet streams. Deep orographic cloud systems, which often occur prior to frontal passage, are typically characterized by ice nucleation near cloud top, followed by diffusional growth and aggregation of ice particles during fallout to the surface. Studies for the Cascades (Hobbs 1975a; Stoelinga et al. 2003), northern Colorado Rockies (Rauber 1987), San Juan Mountains of Colorado (Cooper and Saunders 1980), and Australia’s southern mountains (Long and Carter 1996) all support this basic microphysical evolution. Riming, when it occurs, is typically limited to areas near steeper mountain slopes and near the terrain where local supercooled water production can occur (e.g., Rauber 1987; Geerts et al. 2011).
Determining whether plumes of ground-released AgI reach target areas, and understanding the dynamical processes that affect the spread of those plumes, has been addressed in several studies. Super (1974) found that the plumes from AgI ground releases in clear, stable conditions were confined to the lowest 500 m over the Bridger Range in Montana, while Holroyd et al. (1988) showed that ground-released plumes over Colorado’s Grand Mesa ascended at ~2 m s−1 in cloudy environments, again confined within 500 m of the terrain. More detailed modeling studies of plume dispersion by Bruintjes et al. (1995) showed plumes reaching no higher than 800 m above maximum terrain height when released from an upwind ridge. The emergence of a new state-of-the-art cloud seeding microphysics parameterization (Xue et al. 2013a,b) and advances in microphysical schemes for numerical models (Thompson et al. 2008; Morrison and Grabowski 2008), together with sufficient computing power to resolve large eddies, have recently allowed calculation of plume trajectories and examination of the chain of events associated with glaciogenic seeding (Xue et al. 2013a,b, 2014, 2016; Chu et al. 2014, 2017a,b; Boe et al. 2014).
The final component of physical hypothesis evaluation has been to determine whether the microphysical chain of events following seeding is consistent with the hypothesis, and whether that chain of events results in additional precipitation on the ground. Three approaches have been employed: physical, statistical, and modeling. The primary method used to evaluate whether seeding has an impact on the water mass of the snowpack has been precipitation gauge measurements analyzed with target/control statistics. As noted earlier, most statistical studies of the 1960s and 1970s have been rendered inconclusive by criticism of the methods and data handling. An exception is the Bridger Range experiment, where Super and Heimbach (1983) reported as much as a 15% increase in precipitation when the seeding plume temperature was lower than −9°C. More recent statistical analyses from randomized experiments in Australia (Morrison et al. 2009; Manton and Warren 2011) and from the Wyoming Weather Modification Pilot Project (WWMPP 2014) also suggest that winter orographic cloud seeding can result in increases in precipitation from seedable storms in the range of 5%–15%. However, Ritzman et al. (2015) estimate that only one-third of all winter storms over the mountains in southern Wyoming are seedable by ground generators, assuming the WWMPP criteria, so the overall impact on the winter snowpack is smaller. Studies by Silverman (2008, 2009) of the Kern River and Vail watersheds also yielded positive results.
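The dilution of the per-storm seeding effect by the limited fraction of seedable storms can be seen with back-of-envelope arithmetic. This is an illustrative sketch, not the Ritzman et al. (2015) computation; the 10% figure is simply the midpoint of the 5%–15% range reported for seeded storms.

```python
# Back-of-envelope estimate: a per-storm seeding effect applies only to
# the seedable fraction of storms, diluting the seasonal snowpack impact.
fraction_seedable = 1.0 / 3.0   # Ritzman et al. (2015), southern Wyoming
per_storm_increase = 0.10       # midpoint of the 5%-15% range (assumed)

seasonal_increase = fraction_seedable * per_storm_increase
print(f"Approximate seasonal snowpack increase: {seasonal_increase:.1%}")
```

On these assumptions, a 5%–15% per-storm enhancement translates into only a few percent of additional seasonal snowpack.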
Physical studies of cloud seeding of orographic clouds have, in limited cases, documented precipitation evolution from the point of aircraft seeding to the ground. Hobbs (1975b) presented three case studies of airborne seeding of stratocumulus and cumulus clouds over the Cascades. In each case, enhanced in-cloud ice particle concentrations, transitions in ice particle habits, increases in Ag concentrations in surface snow, and increases in snowfall were observed in space and time consistent with expectations based on ice particle plume trajectory calculations. Marwitz and Stewart (1981) reported similar aircraft observations in Sierra Nevada cumuli, as did Prasad et al. (1989), Deshler and Reynolds (1990), and Deshler et al. (1990) for Sierra Nevada shallow orographic clouds. The seeding was airborne in all these studies. Super and Boe (1988), Super and Heimbach (1988), and Huggins (2007) reported distinct signatures of ground-based AgI seeding in aircraft-measured ice particle concentration and/or in precipitation at the ground in their studies of stable orographic clouds over Colorado’s Grand Mesa, Montana’s Bridger Range, and Utah’s Wasatch Plateau. They observed, from aircraft, changes in cloud structure that were consistent with seeding and enhanced precipitation rates (several times as large as outside of the plumes but generally light, <1 mm h−1) at the ground, coincident with measured AgI plumes. Using an entirely different approach, Warburton et al. (1995) and Manton and Warren (2011) examined the ratio of silver versus a nonnucleating tracer aerosol (In2O3) in freshly fallen snow during ground-generator seeding events in the Sierras and Australia’s Snowy Range, respectively. The ratios of silver to indium showed clear evidence that AgI, acting as an ice nucleant, was selectively incorporated into ice crystals and deposited as snow on the mountains. Recent work by French et al. (2018) reported measurements from radars and aircraft-mounted cloud physics probes that together showed the initiation, growth, and fallout to the mountain surface of ice crystals resulting from glaciogenic seeding (Fig. 22-2).
2) Stimulating rain from cumulus congestus and cumulonimbus clouds
Two approaches, glaciogenic seeding and hygroscopic seeding, have been attempted for rain enhancement from cumulus congestus and cumulonimbus clouds. For rain enhancement from glaciogenic seeding, two hypotheses have been forwarded, the static-mode and the dynamic-mode hypotheses. These are summarized below.
Glaciogenic static-mode seeding hypothesis: Glaciogenic seeding a cumulus congestus/cumulonimbus with silver iodide or frozen CO2 in regions containing supercooled water activates the Wegener–Bergeron–Findeisen process, leading to rapid nucleation and growth of ice, which can then grow to precipitation and fall to the surface in a time frame shorter than the lifetime of the cloud.
Glaciogenic dynamic-mode seeding hypothesis: Seeding a cumulus congestus/cumulonimbus with silver iodide or frozen CO2 in regions containing supercooled water converts the water to ice, releasing the latent heat of fusion within the cloud, increasing cloud buoyancy and causing the cloud to grow taller. This in turn causes more moisture to condense, activates more ice near cloud top, and increases the amount of precipitation falling from the cloud.
A review of research related to the static- and dynamic-mode glaciogenic seeding hypotheses was published by Silverman (2001a), with critical commentary by Hobbs (2001) and Woodley and Rosenfeld (2002), and a reply by Silverman (2001b, 2002). Following publication of this review—and papers included in the review, but published at a later date (e.g., Woodley et al. 2003a,b)—only limited additional evaluation of the scientific hypotheses has been reported (Woodley and Rosenfeld 2004). Silverman (2001a) critically evaluated studies of static glaciogenic seeding of cumulus congestus and cumulonimbus during three Israeli experiments (Israel-1: Gagin and Neumann 1974; Israel-2: Gagin and Neumann 1981; Israel-3: Nirel and Rosenfeld 1994), the High Plains Cooperative Experiment 1 (HIPLEX-1; Smith et al. 1984), and experiments in Spain and Italy (List et al. 1999). Silverman (2001a) then performed a similar evaluation of experiments invoking the dynamic-mode glaciogenic seeding hypothesis, including randomized experiments in the Caribbean (Simpson et al. 1967), south Florida (Woodley 1970; Simpson and Woodley 1971), the first and second Florida Area Cumulus Experiments (FACE 1 and 2; Woodley et al. 1982, 1983), the Texas experiment (Rosenfeld and Woodley 1989, 1993; Woodley and Rosenfeld 1996), the Cuba experiments (Koloskov et al. 1996), and the Thailand glaciogenic seeding experiment (Woodley et al. 2003a,b). From this review, Silverman (2001a, p. 919) stated that
based on a rigorous examination of the accumulated results of the numerous experimental tests of the static-mode and dynamic-mode seeding concepts conducted over the past four decades, it has been found that they have not yet provided either the statistical or physical evidence required to establish their scientific validity.
He noted that the scientific high-level reviews available at the time by the major advisory agencies such as the National Academy of Sciences remained valid. Given the limited additional research on glaciogenic seeding of cumulus congestus and cumulonimbus in the time since the Silverman (2001a) review, his conclusions can still be regarded today as the state of the science.
Two hypotheses, a static-mode hypothesis and a dynamic-mode hypothesis, have emerged for rain enhancement from hygroscopic seeding. These are summarized below.
Hygroscopic static-mode seeding hypothesis: Seeding a cumulus congestus/cumulonimbus below the melting level with large soluble nuclei triggers the collision–coalescence process early in the cloud lifetime, leading to rapid growth of raindrops, which then fall to the surface as precipitation in a time frame shorter than the lifetime of the cloud.
Hygroscopic dynamic-mode seeding hypothesis: Seeding a cumulus congestus/cumulonimbus with hygroscopic nuclei initiates rainfall through the collision–coalescence process. Once rain develops, evaporation below cloud base triggers the formation of cool-air outflows. Lifting along the outflow boundaries forms new convective cells, increasing areal coverage of convection and overall rainfall from the convective complex.
Silverman (2003) provides a comprehensive and critical summary of hygroscopic seeding of convective clouds for rainfall enhancement, specifically, the randomized experiments in South Africa (Mather et al. 1997a,b; Silverman 2000), Mexico (Bruintjes et al. 1999, 2001), Thailand (Silverman and Sukarnjanaset 2000), and India (Murty et al. 2000). His evaluation, which was accomplished in accordance with the original design of the experiments, found that statistically significant increases in precipitation were obtained in all but the Mexico experiments, even after accounting for the multiplicity of hypotheses and analyses associated with the experiments. In the case of Mexico, the statistical results were also positive, but the a priori hypotheses could not be confirmed as statistically significant. He noted that although all four experiments provided some evidence, with either modeling studies or observations, that hygroscopic seeding could accelerate the growth of raindrops through collision–coalescence, very few physical observations were taken as part of the randomized experiments. The need for physical measurements with modern instrumentation later motivated a hygroscopic seeding experiment in Queensland, Australia, in 2009 (Tessendorf et al. 2012) that included aircraft and radar investigations. Results emphasized the complexities of sorting out the many physical processes and variability in evolution of both natural and seeded clouds. Silverman (2003, p. 1226) concluded that the studies through that time had “not yet provided sufficient statistical or physical evidence to claim that hygroscopic seeding of convective clouds to increase precipitation is scientifically proven.” Nevertheless, Silverman (2003) provided a very optimistic view of the potential of hygroscopic seeding.
A second, unanticipated effect that emerged from the statistical analyses was a potential dynamic effect that led to the development of the dynamic-mode seeding hypothesis. Exploratory statistical analyses in some experiments suggested enhancements of rainfall outside the area of seeding. These were thought to be attributable to seeding-enhanced downdrafts creating longer-lived clouds along their peripheries. At this time, this process must be considered in the realm of speculation, and it requires far more supporting physical and model evidence to gain credibility.
3) Hail suppression
Four distinct hypotheses underlie seeding for hail suppression:
Beneficial competition hypothesis: Seeding significantly increases the ice embryo concentration in the hail-growth region of a severe thunderstorm so that the artificial and natural ice particles compete for the available supercooled water. The supercooled water is redistributed among all ice embryos, resulting in smaller hailstones that melt into rain or small, less damaging hail as they fall to the ground.
Early rainout hypothesis: Seeding creates ice particles that accelerate precipitation development within updrafts forming early in the storm, depleting supercooled water and causing rainout from what otherwise would be rain-free cloud bases.
Trajectory altering hypothesis: Seeding produces particles with initially greater mass, altering trajectories such that the particles do not benefit from exposure to high supercooled water content in stronger updrafts in the mature stages of the storm.
Dynamic hypothesis: Seeding supercooled cumulus congestus/cumulonimbus within a storm’s flanking line releases latent heat and increases the buoyancy of the turrets, diverting energy from the main storm so that its updrafts and supercooled water might be reduced.
The initial idea of beneficial competition followed from work in the former Soviet Union that claimed hail reduction of 70%–90% [see the reviews in Marwitz (1973) and Battan (1973, 1977)]. The Soviet claims sparked interest in hail suppression in the United States in the late 1960s, culminating in the National Hail Research Experiment (NHRE), a 3-yr, randomized hail-suppression study (see Foote and Knight 1979 and references therein). NHRE also included extensive physical studies of hail-producing thunderstorms, leading to new insights on the dynamical, microphysical, and thermodynamical evolution of these storms (see review of Knight and Knight 2001). Statistical tests were performed on data from surface networks of hail-measuring instruments, using hail mass as the primary response variable. The project identified possible seeding effects ranging from a reduction of 60% to an increase of 500% in total hail mass, depending on the test used, within 90% confidence limits. The investigators determined that no conclusion could be drawn from NHRE about seeding effects for hail reduction. NHRE was planned for five seasons (1972–76), but was halted after three seasons (1972–74) for lack of results and other reasons described by Foote and Knight (1979).
An outcome of NHRE was a fundamental change in the way that scientists considered the problem of hail suppression. The research showed that, at least on the High Plains, and likely in many regions of the world, rain forms through the graupel/riming process, not through coalescence. This led to the immediate conclusion that the Russian ideas regarding hail growth and the method for its suppression, which had been the motivation for NHRE, were wrong. It also became very clear that the precipitation efficiency of storms is low, so that an increase in the number of hail embryos, even a doubling or tripling, would more likely lead to an increase in hail rather than a decrease, since the liquid water in a natural storm is not even close to being depleted. Concern was also raised about suppressing hail in a supercell storm, since the presence of the bounded weak-echo region (BWER) collocated with the updraft meant that hail growing on the edge of the BWER always encountered undepleted supercooled liquid water (Browning and Foote 1976).
A second major 5-yr (1977–81) randomized field campaign, Grossversuch IV, was carried out in Switzerland, which tested Soviet hail prevention methods using the same procedures and technology applied in the earlier Soviet work (Federer et al. 1986). As in NHRE, the main result of confirmatory and exploratory analyses emerging from this experiment was that no statistically significant difference between seeded and unseeded hail cells could be found. Additional studies have been reported from operational programs in South Africa (Mather et al. 1997a,b); Alberta, Canada (Gilbert et al. 2016); Yugoslavia (Mesinger and Mesinger 1992); and Bulgaria (Simeonov 1996). Although positive results have been reported, these studies do not provide sufficient statistical and physical evidence to claim that hail suppression by seeding is a scientifically proven technology.
4) Fog dispersal
Fog clearing near airports for the purpose of increasing visibility for aircraft attempting to land on runways has been pursued for many years and is an operationally proven technology (Kocivar 1973). Five approaches have been studied and shown to be effective under different meteorological conditions: 1) heating warm fogs from ground level (Appleman and Coons 1970); 2) seeding of supercooled fog, typically with dry ice pellets dropped in from above the fog (Beckwith 1965); 3) spraying liquid propane or compressed air to cause ice nucleation in supercooled fogs (Vardiman et al. 1971; Hicks and Vali 1973; Weinstein and Hicks 1976); 4) hygroscopic seeding of warm clouds with the seeding agent, normally hygroscopic salts, dropped in from above the cloud top (Kunkel and Silverman 1970; Reuge et al. 2017); and 5) using helicopter downwash to mix the fog with dry air from aloft (Plank et al. 1971).
c. Current practice and a look to the future
At present, operational cloud seeding continues to be applied worldwide as a method to increase precipitation in arid regions and to reduce hazards related to hail and fog. However, research to understand the effects of cloud seeding is limited to a few projects across the globe. Orographic cloud seeding research, coupled with operational programs, is now ongoing in the United States (Geerts et al. 2015; French et al. 2018; Tessendorf et al. 2018). Limited research also is continuing with operational programs in locations worldwide [e.g., the United Arab Emirates (UAE 2018)]. These projects are pioneering what is likely to be the most productive path for research in the future: scientific research programs carried out in cooperation with operational programs. The great need for water and clean hydropower in the future demands that we explore this and other avenues for advancing our understanding of the benefits of cloud seeding.
3. Aviation: A catalyst for progress in weather observation and forecasting
A second topic treated here is aviation meteorology. Since the Wright brothers launched the first documented successful flight, aviation has required meteorological support. This section provides a brief history of that development and describes how these needs have in turn necessitated progress in meteorology research. This discussion emphasizes the history of progression in the United States, which led the way for aviation weather for much of the early history of these advances.
a. The Wright Brothers through World War I
In 1903, the Wright brothers made the first successful flight in a powered, fixed-wing aircraft on a sandy beach at Kitty Hawk, North Carolina. The site was chosen because of their early recognition of the criticality of weather to their endeavor, which caused them to write a letter to the Weather Bureau office in Kitty Hawk three years earlier asking for climatic information on expected weather conditions. They received a prompt response from Joseph Dosher explaining that they could expect about 60 miles of mile-wide beach with prevailing winds from the north-northeast in September and October. This would put the wind nearly straight down the beach, providing many miles of consistent wind. This early forecast helped them make the decision to use Kitty Hawk for their flight testing (Kitty Hawk Weather Bureau Office 1900). On the day of their first successful powered flights, after the fourth flight a sudden gust of wind overturned their plane, ending the day’s flights! So weather accidents started on day one of powered flight (Hughes 2012).
By 1911, air routes had expanded from the Wrights’ flights of a few hundred feet to the first transcontinental trip from New York to California. With this progress came an interest in weather information for flight decision-making. Fortunately, the invention of the telegraph had been largely responsible for advancement of meteorology in the nineteenth century. With the advent of the telegraph, weather observations from distant points could be collected, plotted, and analyzed at one location. An 1870 congressional resolution required the secretary of war to provide for taking meteorological observations and giving notice of the approach of storms (see also section 4). So a new national weather service was born within the U.S. Army Signal Service. In 1890, this responsibility was transferred to a newly created U.S. Weather Bureau in the Department of Agriculture. This paved the way in 1914 for the U.S. Weather Bureau to establish an aerological section to provide forecasts to meet the needs of aviation (NWS 2017).
Aviators continued to be interested in receiving information about the weather as part of their planning process. In 1894, kites had been used to loft a self-recording thermometer, making the first observations of temperature aloft. In 1909, the Weather Bureau started a program of free-rising balloon observations (NWS 2017). During World War I (WWI) airplanes actually became a critical tool for providing battlefield observations. Long-range artillery required detailed atmospheric information along the trajectory of the shell to allow accurate placement of a shot (see also section 4). Aircraft were equipped with recorders and sensors; they were flown 2 times per day up to about 14 000 ft (~4300 m) collecting sounding information to be used by gunners. Of course, meteorologists were quick to recognize the potential utility of these observations (Met Office 2017).
b. The end of World War I to the beginning of World War II
Following the end of WWI, interest in aviation spiked. In 1918, the U.S. Weather Bureau began issuing aviation forecasts for domestic military flights and for new airmail routes. The first of these routes was from New York City, New York, to Philadelphia, Pennsylvania, to Washington, D.C. Regularly scheduled flights in all kinds of weather presented new problems, but gradually a reliable performance over the route was achieved. In 1920, airmail routes were opened from New York to San Francisco, California. Radio stations were installed at each airfield along those routes. All plane movements were based on weather reports received via radio (Wright 1999). Meanwhile, in 1919 the first transatlantic flight was completed by a U.S. Navy seaplane, with two intermediate stops (NWS 2017).
Lighted airways were implemented in 1926, along with faster and more capable airplanes, allowing night flight over entire routes. That same year, Congress passed the Air Commerce Act, directing the Weather Bureau to furnish weather reports, forecasts, and warnings to promote the safety and efficiency of air navigation in the United States. Weather hazards to flights included structural icing, snow, fog and low clouds, winds, turbulence, and thunderstorms. The Weather Bureau did its best with its limited resources and capabilities to address each of these aviation weather problems. On the basis of the Air Commerce Act, the Weather Bureau established weather offices at major airports. These offices were equipped with teletype systems providing both observation and forecast information. Pilots received their weather briefings by personally visiting the weather office and interacting with forecasters (Cartwright and Sprinkle 1996).
It was in this environment that Charles Lindbergh, in 1927, flew alone, nonstop from Long Island, New York, to Paris, France. He consulted with the Weather Bureau in planning the flight, but did not wait for the final forecast for weather over the Atlantic Ocean before departing. Forecasters would have recommended a 12-h delay because of fog and rain, both of which did cause problems for Lindbergh (NWS 2017).
In the 1930s, the transport of passengers became a common enterprise, typically in aircraft that carried only 10–15 people. Turbulence was a significant hazard because the smaller aircraft with lighter wing loading flew at relatively low altitudes “in the weather.” Structural icing was also a major hazard, often mitigated with inflatable boots and deicing fluids. Thunderstorm hazards were generally mitigated by maintaining a wide berth from them on the planned routes (Cartwright and Sprinkle 1996).
In 1936, Pan American Airways used flying boats to pioneer air routes across the Pacific Ocean from San Francisco to Honolulu (Hawaii), Guam, the Philippines, and southeastern China. As part of this effort, they hired their own meteorological department, set up their own observational network, and even built their own observing stations and radiosonde operations (Cartwright and Sprinkle 1996).
To ensure a federal focus on aviation safety, President Franklin D. Roosevelt signed the Civil Aeronautics Act in 1938. The legislation established the independent Civil Aeronautics Authority (CAA), with a three-member Air Safety Board that would investigate accidents and recommend ways of preventing them. The legislation also expanded the government’s role in civil aviation by giving CAA power to regulate airline fares and determine the routes that individual carriers served, as well as to provide air traffic control services (FAA 2017a).
By 1938, the threat of war in Europe motivated the U.S. military to begin a significant expansion of their weather services. Along with significant scientific advancement in weather forecasting techniques, radiosondes were rapidly being introduced in the United States and Europe, replacing the limited airplane soundings. This availability of routine upper-air information was responsible for a breakthrough in forecasting skill, including flight at higher altitudes. By 1939, transatlantic passenger flights were beginning, initially using flying boats, with many to/from LaGuardia Airport in New York. This activity brought a whole new dimension of responsibility to the Weather Bureau, which in 1940 was transferred to the Department of Commerce (NWS 2017; Cartwright and Sprinkle 1996).
c. World War II
By 1941, hundreds of flights a week of land planes were being made across the North Atlantic. This highlighted the need for more upper-air data in the North Atlantic. Eventually 22 stations were put in place with upper-air monitoring to satisfy this need (Cartwright and Sprinkle 1996). There was a lack of information on upper-air winds not only over the North Atlantic but also over Europe. In fact, there was a lack of understanding of the jet stream. In 1944, an Allied bomber stream headed to Berlin expecting 45 mi h−1 (~20 m s−1) winds encountered 120 mi h−1 (~54 m s−1) winds, resulting in the loss of 72 aircraft, many of them when they followed wind-blown marker flares and mistakenly altered course to fly over higher concentrations of enemy antiaircraft batteries (WW2Talk 2016).
During the war, much weather information was classified. One of the triumphs of Allied forecasting was the fact that Enigma codes had been broken and Allied forecasters routinely intercepted observation and forecast information from the Germans (see also section 4). This greatly facilitated the production of quality forecasts over Europe. In addition, meteorological reconnaissance aircraft were launched ahead of major bombing missions to provide specific information over the route of flight (de Cogan 2012).
Over the eastern Atlantic, ships were required to maintain radio silence to avoid German U-boat attacks. To mitigate this, two ships were armed and deployed specifically to take meteorological observations. Both were sunk after a half dozen voyages. After this, nine meteorological reconnaissance flights a day reported conditions over the eastern Atlantic and the North Sea. In addition, radar operators discovered that noise in their returns correlated with precipitation over the ocean and enemy territory. Also, weather balloons with attached radar reflectors were used to track upper-level wind speed and direction (de Cogan 2012).
In the same time frame as the war, the U.S. domestic airway weather support structure was well in place. A dozen aviation forecast centers were operating, interconnected with teletype circuits distributing all aviation weather information. The codes developed for these circuits would continue to be used, even to the present day. The CAA established 20 Air Route Traffic Control Centers (ARTCCs) to follow instrument-flight-rules (IFR) flights on critical routes. An agreement was reached between the CAA and the Weather Bureau to provide a small forecasting unit in each of these Centers to alert controllers of weather events that could compromise safety or disrupt flows (Cartwright and Sprinkle 1996).
In the same timeframe as the war, the aircraft jet engine was developed, starting in 1937 with Frank Whittle operating the first engine in the laboratory. By 1942, a Whittle jet engine made by General Electric was flown on the first U.S. jet airplane, the Bell XP-59A. This led to the development of the jet fighter airplane and eventually to the first jet transport airplane. This new type of airplane would place new and different demands on observing, diagnosing, and forecasting aviation weather (Merkt 2017).
Even before the end of World War II (WWII), the Allied governments began planning for postwar aviation. The community convened the International Aviation Conference in Chicago in November 1944 with 52 nations in attendance. Out of this “Chicago Convention” came the framework for international aviation in the postwar period, including the establishment of the International Civil Aviation Organization (ICAO). The following year the International Air Transport Association was established to work hand in hand with ICAO. The third leg of the stool for international aviation was the WMO, established in 1950 (Shun et al. 2009; Cartwright and Sprinkle 1996). One of the purposes for establishing this United Nations organization was to further the application of meteorology to aviation. WMO seeks to ensure worldwide, reliable provision of high-quality, timely, and cost-effective meteorological service to aviation users.
d. Post–World War II through the mid-1970s
One of the most notable advances for aviation weather in the early postwar period was the recognition of the potential for radar to impact weather forecasting. The U.S. Navy gave the Weather Bureau 25 surplus aircraft radars to be modified for ground meteorological use. In 1947, the first military surplus aircraft surveillance radar was installed in Washington, D.C., and forecasters started experimenting with the utility of the technology (NWS 2017).
In the early 1950s, the CAA set up an air traffic control system that provided pilots with route guidance, separation from other aircraft, and weather information along their route of flight. Navigation was based on radio beacons defining the centerline of fixed air routes. As pilots followed these beacons, they were also able to listen to weather information relevant to their route. Airway communication stations were also placed along these routes. These stations provided voice communications to the cockpit. The operators relayed messages from the air traffic control facilities and the airline company and weather information needed by the pilots. These stations had teletype circuits connecting all the reporting stations and providing other coded weather information needed by the pilots (Richards 2009).
In 1954, the Weather Bureau, U.S. Navy, U.S. Air Force, Massachusetts Institute of Technology (MIT), and the University of Chicago formed the Joint Numerical Weather Prediction Unit, which produced regularly scheduled operational forecasts. By the late 1950s, the models were being run 2 times per day (NWS 2017).
On 21 May 1958, Senator A. S. “Mike” Monroney introduced a bill to create an independent Federal Aviation Agency (FAA) to provide for the safe and efficient use of national airspace. Two months later, on 23 August 1958, the President signed the Federal Aviation Act, which transferred the Civil Aeronautics Authority’s functions to a new independent Federal Aviation Agency responsible for civil aviation safety (FAA 2017a).
Also in 1958, National Airlines flew the first commercial jet flight on a route from New York City to Miami, Florida. This marked a major shift in how airlines would deal with weather and what weather is most important for their operations (NWS 2017).
Air transport grew rapidly until 1973. Much of this growth came from technical and institutional innovation, including jet aircraft in the late 1950s, wide-body jets in the late 1960s, airline deregulation in the 1970s, and fuel-efficient twin jets with Extended-Range Twin-Engine Operations (ETOPS) certification. This growth dictated a strong demand for improved weather information, and the government began to develop and leverage new technology to meet the growing need.
In 1959, the Weather Bureau’s first WSR-57 weather surveillance radar was commissioned at the Hurricane Forecast Center in Miami. The next year, 1960, the first two weather satellites were launched, the polar-orbiting TIROS-I and TIROS-II (NWS 2017).
In 1961, a special training program was launched to equip FAA employees to provide flight weather briefings to pilots. That same year, the first official forecast of clear-air turbulence was issued by the Air Weather Service (NWS 2017).
In 1973, the NWS purchased its second-generation weather radar, the WSR-74. This was followed in 1976 by the use of Doppler radar to issue real-time operational forecasts and warnings. This success spawned the later launch of the third-generation WSR-88D program (NWS 2017).
In 1975, the first Geostationary Operational Environmental Satellite (GOES) was launched. By 1977, the success of weather satellites resulted in decommissioning the final U.S. observational ship (NWS 2017).
e. The “watershed” years for aviation weather improvements
The late 1970s began a watershed period of progress in aviation weather. After a crash killed 113 people at Kennedy International Airport in 1975, Professor Ted Fujita from the University of Chicago was called into the discussion. Living in Japan during WWII, he had actually mapped out the damage from the atomic bomb blast at Hiroshima. He had noted patterns on the ground along thunderstorm paths that looked like these bomb blast patterns. NCAR scientists offered to partner with Fujita using two new Doppler radars. In 1978 a field program was held in northern Illinois to search for “microbursts.” Over the summer they detected over 50 of these events. In the summer of 1982, a more extensive field program, the Joint Airport Weather Study (JAWS), was conducted at Stapleton Airport near Denver, Colorado. After a microburst-related crash in New Orleans, Louisiana, killed 153 people, the FAA threw its support behind that project. In 1984, the Classify, Locate, and Avoid Wind Shear (CLAWS) project was held in Denver. This project brought researchers closer to a wind shear warning system (UCAR 2017a).
The first mitigation put in place was developed by NCAR and the Boeing Company. The Wind Shear Training Aid was shared with the airlines to train their pilots in wind shear avoidance and escape. An anemometer-based system was developed called the Low-Level Wind Shear Alert System (LLWAS), consisting of a network of sensors deployed around the airport to detect these events and alert controllers. Also, a radar-based approach was developed between NCAR and the MIT Lincoln Laboratory, using Doppler radar to detect and alert controllers of events near the airport (Fig. 22-3). Following the successful deployment of these terminal Doppler weather radars (TDWRs), wind shear accidents became a thing of the past in the United States (UCAR 2017a; Stith et al. 2019).
Critical to aviation weather, the NWS embarked upon a modernization program in 1989. This included replacing human weather observers with the Automated Surface Observing System (ASOS); implementing the Next Generation Weather Radar (NEXRAD), a national network of WSR-88D Doppler radars mentioned earlier; deploying a new series of geostationary satellites; commissioning a 10-fold increase in computing capacity; and planning the Advanced Weather Interactive Processing System (AWIPS), an all-purpose tool for field forecasters (NWS 2017). This modernization plan was developed in response to the need for improved weather information from many sectors of the economy, including aviation.
The U.S. NWS modernization was executed during the years from 1990 to 2000. The National Meteorological Center installed the Cray Y-MP8 computer. A contract was signed for production of 165 NEXRAD radars, with the 100th radar installed in 1993. A contract to deploy ASOS was issued in 1991. By 1992, 22 of the 115 modernized weather forecast offices were completed, along with 151 of the 1700 ASOS sites. In 1993 a contract was awarded to build AWIPS. By 1997, the NEXRAD network was fully deployed. In 2000 the AWIPS was installed at all 152 sites, formally ending the modernization effort (NWS 2017) and greatly impacting the operation of the U.S. aviation system.
f. A concerted effort for broad improvements in aviation weather safety
Beginning in the early 1990s, building on the success of the wind shear program and the incremental modernization program at NWS, the FAA embarked on the broadly based Aviation Weather Research Program (AWRP). Initially the program focused on the following areas: convective weather, in-flight aircraft icing, and ground deicing of aircraft. In support of these efforts, the program included projects to improve radar algorithms, improve numerical weather prediction models, and develop sound methods of assessing the meteorological quality of products (Kulesa et al. 2003).
In addition, the agency began examining the possibility of a whole new class of radar that could serve both weather and aircraft surveillance needs. This program, called the Terminal Area Surveillance System, was envisioned to be an electronically scanned system that would replace all other radar in the terminal area. The program was too far ahead of the state of the technology, so was shelved to await newer, less costly technology (Mahoney et al. 1995). However, in a separate effort, the Office of Naval Research (ONR) did continue to pursue dual-use phased array surveillance and tracking radars for the U.S. Navy. The result was the Tactical Environmental Processor (TEP) and the Hazardous Weather Detection and Display Capability (HWDDC), which also aided in removal of weather clutter from tactical coverage (Maese et al. 2001, 2007).
In convective weather, early efforts concentrated on nowcasting techniques, that is, very short term (1 h) prediction of the movement, growth, decay, and initiation of convection. On the national scale, a product was developed, tested, and deployed to the National Weather Service called the National Convective Weather Forecast (NCWF). In the terminal area a comparable but finer-scale product called the Terminal Convective Weather Forecast was developed. Many of its features were deployed by the FAA in the Integrated Terminal Weather System (ITWS) (Kulesa et al. 2003; MIT/Lincoln Laboratory 2017c).
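The core of any extrapolation-based nowcast of the kind described above can be illustrated with a toy sketch. This is not the NCWF algorithm itself, only an assumed, minimal form of its simplest component: advecting a storm cell with its recently observed motion vector to estimate its position at a short lead time.

```python
# Illustrative sketch (not the operational NCWF algorithm): linear
# extrapolation of a storm-cell centroid using its recent motion vector.
# Real nowcast systems add growth/decay trending and initiation logic.

def nowcast_position(x_km, y_km, u_km_h, v_km_h, lead_h=1.0):
    """Extrapolate a cell centroid (km) forward by lead_h hours."""
    return x_km + u_km_h * lead_h, y_km + v_km_h * lead_h

# Example: a cell at (10, 20) km moving east at 30 km/h, south at 10 km/h
one_hour = nowcast_position(10.0, 20.0, 30.0, -10.0)
```

The limitation noted in the text follows directly from this form: pure advection carries no information about growth, decay, or initiation, which is why those behaviors required separate research.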
For in-flight icing, the approach taken to both diagnosis and prediction was based on fusing data from a number of indicators including models, radar, surface observations, soundings, aircraft observations, pilot reports, and satellites. An icing product was developed on a four-dimensional grid, allowing users to build a flight path through time and space to depict the icing hazard associated within each grid point. This capability, the “current icing product” (CIP) and “forecast icing product” (FIP), was deployed to the NWS for operational distribution and use. It continues to be incrementally improved as better sensor data, models, and heuristics become available (Kulesa et al. 2003; Bernstein et al. 2005).
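The data-fusion approach behind products like CIP can be sketched with a fuzzy-logic toy example. The membership functions, thresholds, and the simple product combination below are invented for illustration; the operational product fuses many more indicators (radar, satellite, pilot reports) with carefully tuned interest maps.

```python
# Illustrative fuzzy-logic fusion sketch (not the operational CIP):
# combine temperature and humidity "interest maps" into a 0-1 icing
# potential at one grid point. All numbers here are hypothetical.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on [b, c], linear ramps."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def icing_potential(temp_c, rh_pct, cloud_present):
    """Fuse two interest maps; supercooled cloudy air scores highest."""
    if not cloud_present:          # no cloud implies no supercooled water
        return 0.0
    # Supercooled liquid roughly favored between -25 and 0 degrees C.
    t_interest = trapezoid(temp_c, -25.0, -15.0, -3.0, 0.0)
    # High relative humidity supports liquid cloud.
    rh_interest = trapezoid(rh_pct, 70.0, 95.0, 100.0, 100.1)
    return t_interest * rh_interest
```

Evaluating such a function at every point of a four-dimensional grid is what lets a user assemble an icing depiction along an arbitrary flight path, as described above.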
For ground deicing of aircraft, it was noted that a number of accidents occurred because of the use of visibility as a surrogate for snowfall rate. The visibility–snowfall relationship can have serious errors under certain conditions. Thus the program developed effective methods for real-time measurement of the liquid water equivalent content of the snow. This parameter corresponds to the speed of dilution of anti-icing solution used on the aircraft. Weather Support for Deicing Decision-Making (WSDDM) was developed, tested with the airlines, and made available through commercial technology transfer (Kulesa et al. 2003; Rasmussen et al. 2001).
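The central idea, measuring liquid water equivalent (LWE) rate directly instead of inferring snowfall intensity from visibility, reduces to a simple calculation. The sketch below assumes a weighing precipitation gauge; the threshold value is hypothetical, chosen only to illustrate flagging when anti-icing fluid would be diluted too quickly.

```python
# Simplified sketch of the WSDDM measurement idea (assumed form): derive
# a real-time liquid-water-equivalent snowfall rate from gauge
# accumulation rather than from visibility. Threshold is hypothetical.

def lwe_rate_mm_per_h(gauge_mm, minutes):
    """LWE snowfall rate from accumulation (mm) over an interval (min)."""
    return gauge_mm * 60.0 / minutes

def holdover_exceeded(rate_mm_h, threshold_mm_h=2.5):
    """Flag when the LWE rate dilutes anti-icing fluid too quickly."""
    return rate_mm_h > threshold_mm_h
```

Because two snowfalls with the same visibility can have very different LWE rates, a gauge-based rate like this one removes the error source the text identifies.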
In the mid-1990s, AWRP built on its success by adding several other research areas. This research targeted various types of turbulence, along with ceiling and visibility, weather on oceanic flights, and the Aviation Digital Data Service, which was developed and deployed (Kulesa et al. 2003).
Turbulence research proceeded on two parallel fronts. The first studied turbulence produced around airports located near hills and mountains. This pilot project was initially launched at the new Hong Kong Airport and then at the Juneau, Alaska, airport. The challenge is to provide a real-time turbulence alert for an aircraft when the level of turbulence is too strong for safe operation. An operational system for Hong Kong was transferred to the Royal Observatory for use and support (UCAR 2017b). In the case of Juneau, the developed system was transferred to the FAA for operations and maintenance (Politovich et al. 2011).
En route turbulence warning initially involved forecasting clear-air turbulence. A model-based approach initially was developed called the Integrated Turbulence Forecast Algorithm (ITFA) and later renamed Graphical Turbulence Guidance (GTG). This technique depends upon an ensemble of diagnostics applied to model output, with a weighted combination of those diagnostics used to maximize the skill of the final product. As with the icing forecast products CIP and FIP described above, this result is presented on a four-dimensional grid, allowing a selection of forecast along any arbitrary flight path. GTG was subsequently deployed by NWS for gridded distribution and forecaster use (Kulesa et al. 2003; Sharman et al. 2006).
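The weighted-combination idea behind GTG can be sketched in a few lines. The diagnostic names, normalization ranges, and weights below are invented for illustration; in the real system the weights are tuned against verification data such as pilot reports.

```python
# Illustrative GTG-style combination (assumed form, not the operational
# algorithm): map each turbulence diagnostic onto [0, 1], then blend with
# skill-tuned weights. All names, ranges, and weights are hypothetical.

def normalize(value, lo, hi):
    """Map a raw diagnostic value onto the common [0, 1] scale."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def gtg_combine(diagnostics, weights, ranges):
    """Weighted mean of normalized diagnostics at one grid point."""
    total_w = sum(weights[name] for name in diagnostics)
    score = sum(
        weights[name] * normalize(val, *ranges[name])
        for name, val in diagnostics.items()
    )
    return score / total_w

diag = {"richardson_inv": 0.8, "deformation": 12.0}   # raw model diagnostics
wts = {"richardson_inv": 0.6, "deformation": 0.4}     # skill-tuned weights
rng = {"richardson_inv": (0.0, 1.0), "deformation": (0.0, 20.0)}
```

Computing this combination at every point of a four-dimensional grid yields the flight-path-selectable product described in the text.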
Ceiling and visibility was another area of early interest to AWRP. Two major efforts were undertaken. At San Francisco Airport (SFO), when marine stratus clouds impeded a pilot’s ability to visually monitor aircraft on the adjacent parallel approach, parallel approaches were suspended, resulting in a 50% reduction in landing capacity. The forecasting challenge was to predict the time of stratus burn-off, thus allowing parallel approaches. This problem was solved with a set of observational capabilities and a statistical, heuristic algorithm to make the prediction. The key new observations required to support the forecast system included the height of the marine inversion base (stratus cloud top), observed using two sonic detection and ranging instruments (sodars); ceilometers to measure cloud base; pyranometers to measure incoming solar radiation; and high-resolution observations of surface temperature, dewpoint, and wind and their fluxes, used to run a one-dimensional cloud model (Reynolds et al. 2011). It was transferred to the NWS for operation and use (Kulesa et al. 2003). The impact of this system was more efficient utilization of the limited number of landing slots by allowing aircraft to depart toward SFO well before clearing occurred, arriving just in time to use the dual approaches (Reynolds et al. 2012).
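A statistical, heuristic burn-off prediction of the kind described above can be caricatured as a simple regression on the key observations. The coefficients and the linear form below are entirely invented; the real SFO system blends sodar, ceilometer, and pyranometer inputs with a one-dimensional cloud model and statistically derived relationships.

```python
# Toy burn-off heuristic (coefficients hypothetical, for illustration
# only): deeper marine layers clear later, stronger morning sun clears
# them earlier. Not the operational SFO algorithm.

def burnoff_hours(inversion_base_m, solar_w_m2):
    """Predict hours until stratus clearing from two observations."""
    return max(0.0, 0.004 * inversion_base_m - 0.002 * solar_w_m2 + 1.0)
```

Even this caricature shows why the new observations mattered: without a measured inversion-base height and solar input, there is nothing skillful to regress on.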
The other ceiling and visibility project of the AWRP during this period was the development of a tool for general aviation that would diagnose ceiling and visibility on a national basis. This graphically displayed product provided pilots an easier way to visualize the current conditions along their planned route. Since the product was diagnostic, it only showed the pilot what was happening right now, with no forecast component (Kulesa et al. 2003; Herzegh et al. 2015). This made the product less useful as a planning tool, since the conditions were likely to change by the time the pilot arrived at the location.
The AWRP oceanic work during this period centered on exploration of methods to provide in-flight aircraft a real-time depiction of cloud-top height along their route of flight. By being able to see beyond their onboard radar, they could anticipate the need for possible diversion several hundred miles earlier, providing safer and more efficient operations. A satellite-based cloud-top product was developed and made available via the Aeronautical Radio, Inc. (ARINC) paper strip printer in the cockpit. Trials were conducted with United Airlines on their routes to Australia. Although pilots supported the product, the FAA decided not to proceed with deployment because of budget constraints (Kulesa et al. 2003; Lindholm et al. 2013).
The other breakthrough AWRP activity during this period was the development and deployment of the Aviation Digital Data Service (ADDS). The Internet was becoming pervasive. The notion of a user-friendly Internet service providing the best the aviation weather community had to offer was funded by the FAA, developed by NCAR, and implemented by the NWS at the Aviation Weather Center in Kansas City, Missouri. The website grew rapidly to become one of the most referenced sites for aviation users. It also provided a mechanism to try new ideas with users and expose users to the best new research coming from the weather community (Kulesa et al. 2003). ADDS continues to be improved, and serves as a major component of the NWS system for delivery of aviation weather information.
Beginning in 1998, the Aviation Weather Center began providing meteorological documentation to international flights for use in dispatch, preflight, and in-flight. This documentation includes winds/temperatures aloft, significant weather charts, terminal aerodrome forecasts (TAFs), significant meteorological information (SIGMETs), meteorological terminal aviation routine weather reports (METARs), and aircraft reports (AIREPs). These data are obtained from the World Area Forecast System (WAFS) (Aviation Weather Center 2017).
Beginning around 2002 with a company called XM Weather, the dream of having reliable, timely weather information in the cockpit started to become a reality. With ADS-B In (cockpit uplink capability) and inexpensive computer tablet applications, weather information has become available to everyone. For the airlines, the arrival of broadband communications to provide passenger entertainment has made it easy for pilots to access high-quality weather information on their electronic flight bags (usually tablet computers) using the aircraft wireless. Oceanic cloud-top height products and turbulence diagnosis products are among the most popular applications appearing on these new systems (Zimmerman 2017; Kessinger et al. 2017).
g. Aviation weather research focus shifts to delays, capacity, and efficiency
In the early 2000s, aircraft delays seemed to be getting worse each year. The system was operating on the edge and was particularly vulnerable to disruptive weather in high-traffic corridors. A vision was formed for a Next Generation Air Transportation System (NextGen), which was officially included as part of the December 2003 FAA reauthorization act (Vision 100 2017). It was intended to be a multiagency, 25-yr effort to modernize the U.S. system. In late 2003, a Joint Planning and Development Office (JPDO) was established, including a Weather Integrated Product Team (WIPT). This team consisted of members from government agencies, contractors, academics, researchers, and industry (Vision 100 2017).
The weather vision for NextGen was for information that was accurate, timely, and relevant that could be ingested directly into automated decision-support tools. Over time, humans would be required to make fewer decisions as the automation became more weather aware. Humans would spend more time monitoring via status displays and thus being over the decision loop rather than in the decision loop. The system weather functions would be driven from a network-enabled, four-dimensional database (Fig. 22-4) shared by all system stakeholders (Weather Integrated Product Team 2006).
Since this vision was adopted, the FAA and NWS have been engaged in modernizing their respective systems to best achieve the NextGen weather vision. From 2009 to 2012, the NWS continued to upgrade its capabilities incrementally. The supercomputer system was upgraded in 2009, and GOES-15 was launched in 2010. In 2011 the NWS began upgrading the network of Doppler radars with dual polarization, which provides improved rainfall estimates, improved hail detection, and improved diagnosis of structural icing on airframes (NWS 2017). The NWS also implemented the FAA-developed NEXRAD Turbulence Detection Algorithm (NTDA) to provide an in-cloud spectral-width turbulence detection product (UCAR 2017c; Williams and Meymaris 2016).
Meanwhile, the FAA is moving forward with large acquisition programs to support the weather vision. One of the most critical is the development and deployment of Common Support Services–Weather (CSS-Wx), a capability prototyped in FAA research and development for risk reduction before being contracted to Harris Corporation. This program provides the infrastructure services to facilitate network-enabled aviation weather for NextGen stakeholders. A second program, the NextGen Weather Processor, provides a vehicle to transform the weather data into products with utility to specific stakeholders, including air traffic control, dispatchers, and pilots, and disseminates those products through CSS-Wx. This capability was also prototyped by FAA research and development for risk reduction before being contracted to Raytheon, Inc., to develop and deploy (FAA 2017b). A 0–8-h convective weather forecast product called Consolidated Storm Prediction for Aviation (CoSPA), which is based upon blending output from the High-Resolution Rapid Refresh (HRRR) Model and the Corridor Integrated Weather System (CIWS), is included in the NextGen Weather Processor (MIT/Lincoln Laboratory 2017a,b).
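The idea behind a blended product such as CoSPA can be illustrated with a minimal sketch: at short lead times a radar-based extrapolation nowcast is usually more skillful, while at longer lead times the NWP forecast wins, so the blend weight shifts with lead time. The linear weighting and the `crossover_hr` parameter below are illustrative assumptions only, not the actual CoSPA algorithm, which uses far more sophisticated, skill-calibrated blending:

```python
def blend_forecasts(extrap, model, lead_hr, crossover_hr=4.0):
    """Blend an extrapolation nowcast with a model forecast on a common grid.

    The weight shifts linearly from the nowcast at 0 h toward the model
    forecast by crossover_hr; beyond that, the model forecast dominates.
    crossover_hr is an illustrative tuning parameter.
    """
    w_model = min(max(lead_hr / crossover_hr, 0.0), 1.0)
    return [(1.0 - w_model) * e + w_model * m for e, m in zip(extrap, model)]

# Toy reflectivity fields (dBZ) at three grid points
extrap = [45.0, 50.0, 35.0]   # radar-based extrapolation nowcast
model = [30.0, 40.0, 40.0]    # NWP (HRRR-like) forecast

blended_1h = blend_forecasts(extrap, model, lead_hr=1.0)  # nowcast-weighted
blended_6h = blend_forecasts(extrap, model, lead_hr=6.0)  # model only
```

In practice, the blending weights would themselves be tuned against verification statistics rather than fixed a priori.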
h. There is still work to do in aviation weather safety
With continued improvement in sensors, models, and scientific understanding, the weather research community continues to advance the skill of products for aircraft structural icing (UCAR 2017d). This is driven, in part, by the need for new icing products to address the new FAA certification standards, referred to as Appendix O, related to supercooled large-droplet icing. The availability of high-quality microphysics in models provides a mechanism for significant improvement in icing forecasts, and the deployment of dual polarization on NEXRAD offers another powerful tool for diagnosing areas of potential icing in real time.
As with structural icing, research in forecasting turbulence has continued. A number of improvements have been made to the operational GTG, including coverage from the surface to 45 000 ft (~14 000 m), mountain-wave forecasting, and the use of energy dissipation rate as a reporting parameter. A global version of the forecast has been developed and deployed. A nowcasting version of GTG (GTGN) is currently being evaluated for deployment. This product, updated on a 15-min cycle, will be a useful addition to cockpit information displays (Sharman and Lane 2016).
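The energy dissipation rate mentioned above is reported as EDR, the cube root of the turbulent kinetic energy dissipation rate (units of m^(2/3) s^-1), an aircraft-independent measure of turbulence intensity. Under standard Kolmogorov inertial-range assumptions, EDR can be estimated from a wind time series via the second-order structure function. The sketch below illustrates that idea only; it is not the operational GTG or in situ EDR algorithm, and the structure constant and lag choice are simplifying assumptions:

```python
def edr_from_winds(u, dt, mean_speed, c2=2.0, lag=1):
    """Estimate EDR (m^(2/3) s^-1) from a wind time series u (m/s)
    sampled every dt seconds, using the second-order structure function,
    Kolmogorov inertial-range scaling D2(r) = c2 * eps^(2/3) * r^(2/3),
    and Taylor's frozen-turbulence hypothesis (r = mean_speed * lag * dt).
    c2 ~ 2.0 is the longitudinal structure-function constant.
    """
    diffs = [(u[i + lag] - u[i]) ** 2 for i in range(len(u) - lag)]
    d2 = sum(diffs) / len(diffs)                   # structure function at this lag
    r = mean_speed * lag * dt                      # separation distance (m)
    eps = (d2 / (c2 * r ** (2.0 / 3.0))) ** 1.5    # dissipation rate (m^2 s^-3)
    return eps ** (1.0 / 3.0)                      # EDR

calm = edr_from_winds([5.0] * 20, dt=1.0, mean_speed=5.0)      # 0.0
bumpy = edr_from_winds([5.0, 6.5] * 10, dt=1.0, mean_speed=5.75)
```

A single lag is used here for brevity; a real estimator would fit across a range of lags within the inertial subrange.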
A new area of aviation weather research has emerged, in part because of the introduction of higher-efficiency jet engines. Many of these engines appear to be highly susceptible to rollback of power and possible flameout when they ingest a large quantity of concentrated ice crystals. With many instances of this worldwide, the international community has set out to understand and mitigate this safety risk. This effort, called High Ice Water Content (HIWC) or High Altitude Ice Crystal (HAIC), has carried out several international field programs to develop a better understanding of the atmospheric conditions causing this engine event. This same atmospheric phenomenon can also block the inlets of sensor probes, causing erratic behavior of automated systems linked to the probes, as in the loss of Air France 447 over the Atlantic. The potential impacts of this phenomenon were noted as early as 1998 by Paul Lawson (Lawson 1998). A prototype diagnostic algorithm has been developed and is undergoing evaluation (Haggerty and Black 2014). This algorithm makes use of model and satellite data to nowcast areas of potential hazard. Although there is no direct use of aircraft data, the models do assimilate aircraft data to improve their performance.
Spurred by several runway excursions related to slick pavement, the research community has launched programs to assist airport operators in better assessing runway treatment and aircraft operators in more consistently reporting braking action. A program originally funded by the U.S. Department of Transportation to help state highway departments in decision-making for winter road maintenance was adapted for airport use. This program, called the Maintenance Decision Support System (MDSS), is being successfully used at airports to manage the treatment of runways and taxiways. The FAA implemented Takeoff and Landing Performance Assessment (TALPA) procedures and the use of the Runway Condition Assessment Matrix (RCAM) to better assess and report runway conditions (Steiner et al. 2015).
i. The wide world of aviation weather related to unmanned vehicles
One of the newest areas of aviation weather research relates to the flight of “unmanned” (remote controlled) aerial vehicles/systems (UAV/UAS or drones). The class of drone experiencing a surge of growth now is the small aircraft, either fixed wing or rotorcraft, flying below 400 ft (~120 m) AGL, within visual line of sight of the pilot. These aircraft are very susceptible to wind and turbulence. Because they are often equipped with very capable autopilots, they can compensate for heavy turbulence, but at the cost of high power utilization, which can reduce the effective range to unacceptable levels. Some of the potential applications for these vehicles involve urban environments, where obstructions cause highly variable wind fields over very short distances. For such applications to succeed, methods must be devised to provide high-resolution wind diagnoses and forecasts.
j. Summary of applications in aviation and a look to the future
Aviation weather research has carried the aviation industry to levels of safety that were not imaginable 50 years ago. Yet new weather threats continue to materialize as technology progresses. Space missions and supersonic transport will provide new weather challenges. We continue to make improvements in developed capabilities using new technologies for sensing and computing while taking on these new challenges. As we have improved safety, system capacity and efficiency have become new frontiers for aviation weather research. As runways and airspace have become more crowded, the system is less tolerant of capacity wasted by inaccurate weather forecasts. Also, efficiency of operation, driven by the cost of fuel and by concern for the environment, is strongly influenced by the accuracy of the weather forecast. In the future, route optimization may require consideration of the cost of climate change, including such forecasts as contrail production and dissipation. We have barely scratched the surface of what is possible as we learn how to integrate and couple weather information into aviation decision-support systems. Fortunately, the aviation weather community can continue to leverage the substantial progress of the overall weather enterprise to help meet the challenges that we anticipate for the future.
4. Applications to national security
National security has driven and benefited from myriad advancements in applied meteorology and climatology in the United States over the last century. Two-way connections between security and science have pervaded funding, institutions, people, data, technology, techniques, and culture. Some connections are familiar and historic, such as the forecasts for D-Day (Ross 2014) or radar’s application in meteorology (Buderi 1996; Rogers and Smith 1996), as also mentioned in section 3 and the chapter in this monograph dealing with observations (Stith et al. 2019) for advancing aviation meteorology. Other connections are less familiar yet worth recalling.
a. Introduction to security applications
National security is most starkly manifest in military contexts, especially during wartime, when weather and climate invariably play important roles (Winters et al. 1998). Much of the material below focuses on those applications. Brief attention is also given to other contexts, such as security of the economy, infrastructure, health, and resources such as food, water, air, and energy (e.g., Romm 1993; Executive Office of the President 2015; O’Sullivan 2015; Ammerdown Group 2016).
b. A century of advancements in security applications
1) A century ago…
A century ago, much of the world was recovering from WWI. Although primitive by modern standards, applied meteorology was used during that war to plan and conduct certain operations. Observations from ground stations, pilot balloons, and aircraft (Dastrup 1992; Pedgley 2006) were an asset for artillery, long known to be sensitive to weather—at long ranges meteorological error contributes up to two-thirds of artillery’s total trajectory error budget (Wahl 2006; Jones 2017). New weather sensitivities were brought to the fore by aircraft, poisonous gas, and other innovations (Pedgley 2006; Ross 2014). Developments in technologies such as radio set the stage for new methods of observation (DuBois et al. 2002). Unfortunately, effective weather prediction during the war was hampered by immature and flawed conceptual models of the atmosphere (Pedgley 2006). Thus, at that time commanders distrusted forecasts (Ross 2014).
Since WWI, advancements in applied meteorology and climatology have been transformational for national security, if sometimes achieved through fits and starts. Between WWI and WWII, progress in atmospheric science was “stagnant,” to use Harper’s (2012) expression, or “less than spectacular,” to use Smagorinsky’s (1970). Severely restricted budgets during the Great Depression hampered integration of new methods of analysis and forecasting (Harper 2012). Then WWII set in motion great and lasting changes, one of which was defense-related patronage for science (Mazuzan 1994; Buderi 1996).
2) Patronage in the interest of national security
WWII convinced nations that victory in battle can hinge on science and that scientists can be oriented toward solving military problems (Sapolsky 1990; Buderi 1996; Genuth 2001). That realization fostered a sometimes strained but often powerful symbiosis among academe, the federal government, and industry that became the model for funding and conducting scientific research in the United States into the twenty-first century.
After a temporary ebb in federal patronage immediately after WWII, what arose was the practice of tapping a variety of governmental sources to fund academic research (Sapolsky 1990). Between WWII and the Vietnam War, the armed services in particular were liberal patrons of the sciences. In 1949, defense-related sponsorship accounted for 96% of all government funding for physical science at universities (Genuth 2001). ONR was a particularly important patron in the first years following WWII, being at the time the only organization “of any significant size” available to support civilian scientific research (Sapolsky 1990). The influence of the ONR on science even extended to shaping policies, some of which are still in effect. In 1950, ONR’s Alan Waterman was named the first director of the National Science Foundation (NSF), aspects of which were modeled after ONR (Mazuzan 1994).
Over the decades, differences in culture, timelines, and priorities between military sponsors and scientists challenged both sides, and events such as social opposition to the Vietnam War tempered universities’ enthusiasm for defense-related projects for a time (Sapolsky 1990). Nevertheless, the patronage continued, modulated by the rise and fall of perceived threats to U.S. national security. In the wake of Sputnik’s launch in 1957, for example, national resolve to lead the space race boosted funding for the NSF (Mazuzan 1994) and inspired a new appreciation for education in technical fields.
3) Observations, observing systems, and communications
As mentioned in section 3, when the U.S. Weather Bureau was established in 1870, it was established in the War Department. The Department’s Signal Corps was viewed as offering the personnel, technical expertise, infrastructure, and discipline well suited to collecting and communicating weather observations (Fleming 2000). Although that arrangement was motivated partly by the particular situation of the Corps and its leadership at the time (Fuller 1990), intelligence about weather has long been recognized as a key element of national security.
Encrypting your own observations and decrypting your enemies’ is important warcraft. Intercepted German weather information during WWII was one source of the “cribs” that Allies used to break Enigma codes (Giles 1987; Gaj and Orlowski 2003; Ross 2014). Spies and partisans behind enemy lines can be key sources of weather intelligence (Ross 2014). During the Cold War, the National Security Agency included a weather unit focused on secretly collecting and disseminating weather data (Richelson 2013). During the mid-1960s, the North Vietnamese encoded their weather reports in a way that simultaneously provided correct information to those who knew the code but deceived with plausible incorrect information those who did not (Richelson 2013). Observations are especially important when forecasts are unavailable or prove consistently unreliable. That is one reason why reconnaissance flights became a mainstay of operations in theater from the beginning of military aviation. In WWII, reconnaissance aircraft would survey the general weather in advance of operations, determining the bases and tops of clouds and looking for the signatures of cold fronts (Giles 1987). In the Korean War, simple visual reconnaissance was judged consistently superior to operational forecasts of conditions needed for air cover, so in 1952 Bomber Command stopped using forecasts as a factor in assessing air cover during mission planning (Fuller 1974). During mission planning, failure to communicate basic situational awareness of weather can have grave consequences. In April 1980 the infamously disastrous attempt to rescue the American hostages in Iran was plagued by a variety of mishaps and unexpected difficulties, one of which was the haboob (an intense dust storm) that surprised aircrews. Military meteorologists knew such storms were possible yet failed to convey that risk (Radvanyi 2002).
Many of the instrument platforms used for observations a century ago are still used today, albeit in refined forms. Other modern platforms would not have been imagined a century ago. Radiosondes introduced during WWII were an improvement over kites, tethered balloons, pilot balloons, and sonde designs from the 1930s (DuBois et al. 2002; Stith et al. 2019). Starting in 1959, rockets fired from U.S. Department of Defense (DOD) test ranges were used to sound the atmosphere to altitudes of 25–60 km MSL and beyond, much higher than standard sondes lofted by balloon (Webb et al. 1961; Webb 1981). Data collected by rockets were an important step toward laying a foundation for national security in space. The maturity of rocket and television technologies by the early 1960s put satellite observations within reach. In the opinion of Fett (2014), the invention of the satellite has proven as important to meteorology as the invention of the telescope to astronomy. NASA’s launch of TIROS-1 in 1960 certainly revolutionized meteorology, and the subsequent Television Infrared Observation Satellites launched later that decade provided dramatic views of Earth and its weather that were only imagined to that point (e.g., Fritz and Wexler 1960). However, concerned that a civilian weather satellite program of NASA, the U.S. Weather Bureau, and partners would not meet all of the requirements dictated by national security, in the early 1960s the U.S. DOD started its own Defense Meteorological Satellite Program (DMSP; Hall 2001). DMSP satellites provided vital weather intelligence from 1962 through the 2010s. At the height of the Cuban missile crisis, DMSP satellite images enabled safer, more efficient aerial reconnaissance (Hall 2001). During the Vietnam War, Lieutenant General Momyer contended that “the weather [satellite] picture is probably the greatest innovation of the war” (Fuller 1974, p. 24). 
The wealth of high-resolution data available starting with the block V editions of DMSP satellites in the 1970s was the basis for the Navy Tactical Applications Guides (NTAGs), an 11-volume set of training documents for forecasters, developed by the Naval Research Laboratory from the mid-1970s through early 1990s (Fett et al. 1997).
Contemporary with early weather satellites, it was also in the 1960s that modern lidar (Weitkamp 2005) was first applied in meteorology (e.g., Fiocco and Smullin 1963; Goyer and Watson 1963). Since then, different designs of ground-based, airborne, and spaceborne lidars have been used or studied as part of many national security efforts (e.g., Carr et al. 1999; Peglow and Molitoris 1997; Cionco et al. 1999; Calhoun et al. 2006; Warner et al. 2007; Herrmann et al. 2013). Improvements to laser technology and better understanding of atmospheric effects on laser light are integral to directed-energy weapons and advanced communications (Coffey 2014; Kaushal and Kaddoum 2017). Atmospheric effects on electromagnetic waves have also been the focus of many studies (e.g., Haack et al. 2010; Wang et al. 2017).
Weather observations underpin basic military operations in more mundane instances as well. Wet-bulb globe temperature and wind chill are monitored to prevent heat- and cold-related injury (U.S. Army 2016). A variety of observation-based methods have been developed for predicting severe weather, especially lightning. Methods vary depending on whether the lead time of interest is from minutes to hours (e.g., Saxen et al. 2008; Stano et al. 2010), from hours to days (e.g., Liu et al. 2008), or from months to years, in which case climatographies can be employed (e.g., Saxen et al. 2008). Observing and predicting even innocuous weather is necessary for planning, executing, and interpreting results from a variety of research, development, test, and evaluation activities in DOD.
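The wet-bulb globe temperature monitoring mentioned above combines natural wet-bulb, black-globe, and dry-bulb temperatures with fixed weights (0.7/0.2/0.1 outdoors). The sketch below shows the standard outdoor formula; the flag-category thresholds follow commonly cited U.S. Army heat-category bands but are included only for illustration and should be checked against current guidance before any real use:

```python
def wbgt_outdoor(t_nwb, t_globe, t_dry):
    """Outdoor wet-bulb globe temperature: 0.7 * natural wet bulb +
    0.2 * black globe + 0.1 * dry bulb. All inputs must share one
    temperature unit; the result is in that same unit."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dry

def heat_category(wbgt_f):
    """Map WBGT (deg F) to a heat-category flag color. Thresholds here
    follow commonly cited U.S. Army bands (illustrative only)."""
    if wbgt_f < 82.0:
        return "white"   # category 1
    if wbgt_f < 85.0:
        return "green"   # category 2
    if wbgt_f < 88.0:
        return "yellow"  # category 3
    if wbgt_f < 90.0:
        return "red"     # category 4
    return "black"       # category 5

# Example: humid afternoon with strong sun (deg F)
w = wbgt_outdoor(t_nwb=80.0, t_globe=100.0, t_dry=90.0)  # ~85 deg F
```

Note the heavy 0.7 weighting on the wet-bulb term: humidity, not air temperature alone, dominates heat-injury risk.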
4) Numerical weather prediction and other forms of forecasting
The birth and growth of numerical weather prediction (NWP; see Benjamin et al. 2019, this monograph) is a striking example of the close, sometimes serendipitous interplay between national security and applied meteorology and climatology. The first NWP calculations were performed on the famed Electronic Numerical Integrator and Computer (ENIAC; Charney et al. 1950). ENIAC (Goldstine and Goldstine 1946) was not designed and built for the meteorological community, but for the Ballistic Research Laboratory at Aberdeen Proving Ground, Maryland. It was used to calculate artillery firing tables and the yields of hydrogen bombs. To the visionary mathematician John von Neumann, ENIAC presented quite a different opportunity. He, Jule Charney, and colleagues had for years appreciated that electronic computing could be applied to dynamical meteorology (Platzman 1979). After WWII, funded by the U.S. Navy, von Neumann and his team recast theory into a form suitable for numerical methods. In 1950, with the support of the U.S. Army Ordnance Department, the team made the barotropic calculations on ENIAC that forever transformed weather forecasting and atmospheric science in general (Harper 2012). By 1956, operational NWP forecasts were being issued by the Joint Numerical Weather Prediction Unit, a partnership among the Air Force Air Weather Service, the Naval Weather Service, and the U.S. Weather Bureau (Shuman 1989). The decades that followed saw development of increasingly complex and skillful models for operational NWP, and increasingly specialized models for research. Some models served in both capacities.
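The barotropic calculations referenced above integrated the barotropic vorticity equation, which in terms of the streamfunction $\psi$ can be written

$$\frac{\partial \zeta}{\partial t} = -\mathbf{V}_{\psi} \cdot \nabla\left(\zeta + f\right), \qquad \zeta = \nabla^{2}\psi,$$

where $\zeta$ is the relative vorticity, $f$ is the Coriolis parameter, and $\mathbf{V}_{\psi}$ is the nondivergent wind derived from $\psi$. Each forecast step advects absolute vorticity and then recovers $\psi$ by inverting the Poisson equation $\nabla^{2}\psi = \zeta$, a cycle simple enough to fit within ENIAC’s limited memory and speed yet rich enough to capture the large-scale evolution of midlatitude flow.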
Starting in the early 1970s, the U.S. Navy used limited-area models to study marine fog, clouds, and the atmospheric boundary layer (Rosmond and Barker 2014). From the late 1970s through the 2010s, naval operational models and data assimilation systems from mesoscale to global were developed, implemented, and improved (Hodur 1982; Hogan and Rosmond 1991; Bayler and Lewit 1992; Hodur 1997; Holt et al. 2011; Reynolds et al. 2011; Hogan et al. 2014). Among the U.S. Navy’s emphases has been coupled modeling of the atmosphere–ocean system and ensemble prediction.
Since WWII, the U.S. Air Force has provided weather support for U.S. Army operations as well as its own (Moyers and White 2004). Among the NWP models used operationally by the U.S. Air Force were two groundbreaking community mesoscale models: the Fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5; Dudhia 1993) was run operationally starting in 1997 (Air Weather Association 2012) and the Weather Research and Forecasting (WRF) Model (Skamarock et al. 2008) starting in 2006. The U.S. Air Force contributed to the development of both systems. As with the U.S. Navy, ensemble prediction has been an important part of U.S. Air Force modeling.
Apart from the support provided by the U.S. Air Force, the U.S. Army has also pursued its own weather modeling, in particular for highly tailored applications outside of typical operations. Two examples are the diagnostic Three-Dimensional Wind Field (3DWF) Model, which calculates fast solutions for complex natural or built environments (Wang et al. 2010), and the Atmospheric Boundary Layer Environment (ABLE) Model, which simulates microscale processes in the atmospheric boundary layer (Wang et al. 2012; MacCall et al. 2014; MacCall and Wang 2014).
Aside from traditional NWP, other methods of weather prediction were also employed over the last century. Applied climatology provided important guidance during WWII (Jacobs 1947). Theoretical approaches were used to estimate requirements for solar-operated condensers to distill seawater for use in life rafts on aircraft and to determine where deicing boots should be standard on aircraft at certain times of the year. Statistical approaches were used to draft maps of secondary targets for bombing missions, conditioned on unfavorable weather at primary targets. For some climatological applications during the war, it was important to characterize a typical day of weather through synoptic (sometimes called “synchronous”) climatographies, so that the variables retained their mutual dependence and spatial coherence. There are more recent applications of conceptually similar techniques, such as self-organizing maps (e.g., Hewitson and Crane 2002). Climatological studies were possible during the 1940s because of abundant synoptic records available from stations around the world dating to the last few decades of the nineteenth century (Jacobs 1947). To organize the data and streamline analysis, during the Great Depression the Works Progress Administration funded staff to manually transfer those synoptic observations to punch cards that were then fed to IBM machines for sorting (Jacobs 1947; Whitnah 1961).
The data on punch cards were also used for analog forecasting. Analogs—and very controversial 30-day forecasts—were a favorite of Irving Krick, a member of the U.S. Army Air Force’s contingent on the international team charged with predicting conditions for the landings at Normandy on D-Day (Stagg 1971; Bates and Fuller 1986; Fleagle 2001; Petterssen 2001; Ross 2014). Responsibility for the legendary forecasts was distributed among three centers that collaborated over scrambled telephone lines from separate locations to avoid presenting a single target to German bombs (Ross 2014). Leading the effort was James Stagg of the Royal Air Force. Stagg grappled with how to incorporate into his consensus reports the subjective uncertainty in the three centers’ sometimes starkly different predictions (Ross 2014). Characterizing uncertainty in objective, useful terms is now a primary goal of modern ensemble forecasting (Toth et al. 2001; Kalnay 2003; Eckel et al. 2010; Reynolds et al. 2011; Knievel et al. 2017).
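The objective characterization of uncertainty that modern ensembles provide can be sketched in miniature: given a set of equally likely forecast members, the ensemble mean, spread, and a simple exceedance frequency summarize both the forecast and its uncertainty. The members and threshold below are invented for illustration:

```python
def ensemble_summary(members, threshold):
    """Summarize an ensemble forecast of one variable at one point:
    (mean, spread as population standard deviation, probability of
    exceeding threshold estimated as the fraction of members above it)."""
    n = len(members)
    mean = sum(members) / n
    spread = (sum((m - mean) ** 2 for m in members) / n) ** 0.5
    prob = sum(1 for m in members if m > threshold) / n
    return mean, spread, prob

# Toy 10-member forecast of wind speed (m/s) at a single location;
# what is the chance winds exceed a 15 m/s operational limit?
members = [12.0, 14.5, 13.0, 16.0, 15.5, 11.0, 17.0, 14.0, 13.5, 15.0]
mean, spread, p_exceed = ensemble_summary(members, threshold=15.0)
```

Operational systems refine this idea with bias correction and calibration, but the core product is the same: a probability rather than a single deterministic number.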
No matter the source of a forecast, WWII drove home to many in the national security sector that the utility of weather data depends heavily on how they are analyzed, organized, and presented. Jacobs (1947) argued that an effective presentation should be “attractive,” even “arresting,” and that the information must be actionable. Success is more likely when end users provide input during development of an application, and when developers provide interpretation or training during an application’s use. Jacobs (1947) warned that without this mutual engagement, the “weather naiveté” of military planners can lead to misuse of weather intelligence. For example, temperature specifications in the design of materiel during WWII were sometimes erroneously based on mean temperatures calculated from very sparse observing networks, with insufficient appreciation for how finescale phenomena and processes produce extreme conditions that depart from smooth climatographies.
5) Weather modification, intended and otherwise
Given weather’s influential, sometimes decisive role in battle (Winters et al. 1998), it is not surprising that militaries have sought to control the atmosphere and make it an ally—or at least less of an enemy (e.g., House et al. 1996). During prescient discussions in 1945 about how electronic computing might revolutionize meteorology and climatology, controlling the weather was envisioned alongside NWP (Fleming 2016).
Fog at airports plagued bombing missions during WWII. Dissipating it by heating air (see also section 2), deemed prohibitively expensive when first considered in the nineteenth century, was revisited during Project Fog Investigation and Dispersal Operation (FIDO). A system of pipes and jets was installed and used successfully at more than a dozen British airfields (Giles 1987; Fleming 2010). The largest installations burned through in excess of 200 000 gallons (757 000 L) of petrol per hour, lining runways with thunderous walls of flame. Returning pilots had visions of descending into hell (Fleming 2010).
In Vietnam, starting in 1967, the United States seeded clouds with silver iodide in an attempt to prolong the downpours of the summer monsoon and thereby limit the enemy’s flow of troops and supplies (Fuller 1974; Fleming 2010). This issue of trafficability, a vehicle’s ability to move across terrain (e.g., in a supply convoy), is basic to military operations (e.g., Sanderson 1954; Stagg 1971; Stinson 1981) and has motivated cross-disciplinary applied research combining meteorology, climatology, and pedology (e.g., Jacobs 1947; Rula et al. 1963).
Cloud seeding was envisioned for a time as potentially a way to weaken hurricanes and reduce their wind damage. This hope motivated Project Stormfury (Willoughby et al. 1985), conducted from 1962 through 1983 with support from the U.S. Navy for part of its duration. The hypothesis was that seeding convective clouds outside an eyewall would rob it of vigor, establish a new wall farther from the storm’s center, and thereby reduce the storm’s wind speeds. Aircraft seeded four hurricanes. Some results appeared encouraging at the time. However, it is now known that eyewall replacement cycles happen naturally, and that supercooled liquid is too scarce in hurricanes, and ice too abundant, for seeding to be effective. Although there were good scientific outcomes from Stormfury, evidence for its motivating hypothesis was not among them. There were political outcomes, too. In 1963, during Stormfury’s second season and in the wake of the previous year’s Cuban missile crisis, Fidel Castro accused the United States of using weather modification to alter the track of Hurricane Flora (Fleming 2010). After crossing Haiti, Flora stalled and meandered slowly over Cuba, devastating the country with winds of 30–45 m s−1 for 100 h or more and maximum rainfalls > 2 m before departing to the northeast (Dunn et al. 1964). Flora was not, in fact, seeded. Its odd track was purely a result of the meteorological setting (Dunn et al. 1964). Still, the prospect of using hurricanes or other atmospheric phenomena as a weapon was an international concern. Mexico blamed its drought on cloud seeding by the United States. In the end, the United States was compelled to prohibit future Stormfury operations in certain regions. The project’s director, Bob Simpson, lamented that the restrictions severely hampered the group’s experiments. Later efforts to extend Stormfury into the Pacific Ocean region were halted by yet more international opposition (Willoughby et al. 1985).
Unintentionally modifying weather and climate has also been a concern for national security—modifying by nuclear war, for example. Although studies and experience following WWII suggested that individual nuclear detonations cannot trigger persistent global changes in weather and climate (Roberts 1970; Glasstone and Dolan 1977; Kunkle and Ristvet 2013), widespread nuclear war could be another matter entirely. Research in the 1980s (Crutzen and Birks 1982; Turco et al. 1983; National Research Council 1985) suggested that smoke from firestorms, which might be triggered if a substantial fraction of the world’s nuclear arsenal were detonated, could catastrophically alter Earth’s climate. What was dubbed nuclear winter quickly became the subject of intense public and political interest (Badash 2009; Dörries 2011). Lacking sufficiently similar empirical analogs to nuclear war, researchers relied on numerical modeling (Cotton and Pielke 1995) and still do. As models become more sophisticated, as physical processes are more realistically represented (e.g., ocean dynamics), and as assumptions and uncertainties are better addressed (e.g., the role of clouds, and the composition and distribution of smoke from firestorms), the plausibility and character of a hypothesized nuclear winter continue to evolve (e.g., Robock et al. 2007b,a; Toon et al. 2007; Mills et al. 2014; Pausata et al. 2016).
c. The present state of meteorology in security applications
Asymmetric threats to national security on a scale smaller than nuclear war, especially threats from terrorism, are an emphasis of more recent work in applied meteorology and climatology. (The term asymmetric is often used to describe the threat that a smaller, less powerful belligerent poses to a larger, more powerful belligerent, especially when the former resorts to unconventional tactics. Rebellions and terrorism are examples.) Concern over airborne gases and particles from natural, accidental, or malicious sources motivated field campaigns such as URBAN 2000 (Allwine et al. 2002), Joint Urban 2003 (Allwine and Flaherty 2006), MID05 (Allwine and Flaherty 2007), and Pentagon Shield (Warner et al. 2007), all of which focused on understanding and improving modeling and validation of transport and dispersion (T&D) by microscale circulations in urban settings.
The challenges of urban T&D have spurred development of a range of specialized models, in some cases because operational NWP on standard computers does not provide guidance quickly enough for emergency responders and other authorities. Such models include the Hybrid Single-Particle Lagrangian Integrated Trajectory Model (HYSPLIT) developed by the National Oceanic and Atmospheric Administration (Stein et al. 2015), the Quick Urban and Industrial Complex (QUIC) dispersion modeling system (Brown 2004), the Hazard Prediction and Assessment Capability (HPAC) developed by the Defense Threat Reduction Agency (DTRA 2008), and the Open Burn/Open Detonation Dispersion Model (OBODM) for simulating the effects of DOD materiel burned or detonated in the open air (Bjorklund et al. 1998).
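At the core of many fast-response dispersion tools is the steady-state Gaussian plume solution for a continuous point source. The following minimal sketch illustrates that solution only; the function, its parameter values, and the use of fixed dispersion parameters (in practice they grow with downwind distance and depend on atmospheric stability) are illustrative assumptions, not drawn from any of the models named above:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (mass per volume) at
    crosswind distance y and height z for a continuous point source of
    strength q (mass per time) at effective height h, with mean wind
    speed u and dispersion parameters sigma_y, sigma_z. The second
    vertical term represents reflection off the ground."""
    lateral = math.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2.0 * sigma_z**2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level, centerline concentration for an elevated release
c = gaussian_plume(q=1.0, u=5.0, y=0.0, z=0.0, h=50.0,
                   sigma_y=100.0, sigma_z=50.0)
```

In complex urban terrain this idealized solution breaks down, which is precisely why systems such as QUIC and HPAC resolve building-scale flow rather than assuming a uniform wind.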
To address a different kind of asymmetric threat, the Naval Research Laboratory developed an application called the Pirate Attack Risk Surface (PARS). Intelligence about the states of the atmosphere and ocean is dynamically combined with intelligence about shipping patterns and historical and recent behavior of pirates to predict where and when pirates are likely to attack ships around the Horn of Africa (Hansen et al. 2011). Predictions are probabilistic and account for uncertainties in the input fields. PARS maps are designed for operational use by interdiction forces.
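The details of PARS are specific to the Naval Research Laboratory's implementation, but the underlying idea of scaling a climatological attack rate by an environmental feasibility factor can be sketched generically. Everything below (the function name, the logistic form, and the weights) is an illustrative assumption, not the PARS algorithm:

```python
import math

def pirate_risk(wave_height_m, wind_speed_ms, historical_rate):
    """Toy risk estimate in [0, historical_rate]: small-boat operations
    become infeasible in rough seas, so an environmental feasibility
    factor (logistic decay with sea state and wind) scales a
    climatological attack rate. All weights are illustrative only."""
    feasibility = 1.0 / (1.0 + math.exp(2.0 * (wave_height_m - 2.5)
                                        + 0.3 * (wind_speed_ms - 10.0)))
    return feasibility * historical_rate

# A small "risk surface" over combinations of sea state and wind,
# analogous in spirit to a gridded map product
grid = [[pirate_risk(h, w, 0.4) for w in (5.0, 10.0, 15.0)]
        for h in (1.0, 2.0, 3.0)]
```

A real system would replace the fixed weights with statistically trained relationships and propagate uncertainty from ensemble forecasts of the environmental fields.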
Weather modification remains an area of focus, within the bounds permitted by international agreements such as the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques of 1978 (e.g., House et al. 1996). Many of the potentially modifiable phenomena faced by militaries a century ago remain a challenge today, such as fog, clouds, and precipitation. Other concerns, such as space weather and the atmosphere’s effects on directed-energy weapons, are more recent.
Natural hazards that are beyond practical modification can nevertheless be used to tactical advantage if one predicts and adjusts to them better than does a foe. For example, regional and global monitoring and modeling of airborne volcanic ash is important to military and civilian aviation security (e.g., Office of the Federal Coordinator for Meteorological Services and Supporting Research 2007).
d. The future of meteorology in security applications
As with past progress, future progress in applied meteorology and climatology for national security will be inextricably tied to changes in society and politics, infrastructure, the environment, technology, and science in general. By 2030, populations in 41 cities are projected to exceed 10 million, and 60% of the world’s population will be urban (United Nations 2016; Haupt et al. 2019a, Part II of this monograph chapter series). Urban weather and climate will figure prominently in national security (Knapp et al. 2016). Modeling will continue its progress to finer spatial and temporal scales, where stochastic processes will present new challenges, including to data assimilation and model validation.
Climate change and related ecological and social disruption are concerns for national security on many fronts (Malone 2013), among which are human health (Willett and Sherwood 2012), military and civilian infrastructure (Interagency Security Committee 2015), agriculture, transportation, and energy (U.S. Global Change Research Program 2009; Haupt et al. 2019a,b, this monograph). Pandemics and disease vectors and how climate change might influence them (e.g., Monaghan et al. 2016) will continue to be a concern for domestic national security (U.S. Department of Homeland Security 2014). The United States and other nations will need to ensure stable, sufficient supplies of food and water in the face of a variety of threats (McElroy and Baker 2014). Navigation in Arctic waters and the state of coastal infrastructure are expected to be quite different on a warmer planet with higher seas (Executive Office of the President 2013; Goldstein 2016).
Accompanying these challenges will be unforeseen or only dimly imagined advancements in applied meteorology and climatology to help society meet those challenges. As observed by Johnson (2000) and described in detail by others (e.g., Shuman 1989; Fleming 2016), technology is often at the core of such advancements. In the coming decades, observing platforms such as crewless aircraft (i.e., drones or unmanned aerial vehicles—see section 3i of this chapter) will proliferate throughout many sectors of society, to the benefit of environmental monitoring and modeling. What satellites can sense and resolve will continue to improve, and retrieval algorithms will be developed that are more sophisticated and accurate. Constellations of smaller, comparatively cheaper satellites will enhance our view of poorly observed regions of the planet. Shorter-range remote sensors such as lidars also will become smaller and cheaper, as well as more transportable and durable. For computing applications that can take advantage of them, graphics processing units promise accelerations by orders of magnitude. Quantum computing, as it matures, could revolutionize how we approach certain classes of problems. Machine learning will help us make sense of seemingly boundless, unmanageable data archives. Military technology itself will become more advanced (e.g., directed-energy weapons), as will civilian technology (e.g., autonomous vehicles), presenting new weather sensitivities. Decision-support systems to address those sensitivities will be more sophisticated and automated; their biases and uncertainties will be clearer and more easily mitigated. Well-calibrated probabilistic approaches will be the norm.
5. Summary and concluding thoughts
These examples of successful and critical applications in meteorology, and how, in some cases, they can change the course of world events, are quite profound. At one time in history, humankind was content to observe the weather. As time passed, people began to predict the weather. Then very specific forecasts were needed for each particular application. Several have been described here.
We have seen how early scientific discoveries led to a great interest in modifying the weather in ways that are beneficial to society and also have seen how that interest led to research support for ever more detailed scientific understanding. It is now clear that early ideas about cloud seeding were simplistic. With the advent of modern ground-based and airborne instruments for making critical measurements, and with the advent of modern high-performance computing for numerical modeling, we have much better insight into the complexities and possibilities involved. Based on this understanding, weather modification techniques have been developed and tested with the goals of increasing alpine snowpack and dispersing supercooled fog. We have a reasonable idea of how well these techniques work and under what conditions. In other areas of interest, such as stimulating rainfall and suppressing hail, research and testing remain incomplete, and results are inconclusive.
Detailed knowledge of current and future weather is also essential for aviation, which has spurred development in observational and forecasting techniques that have benefited many other sectors. As we saw in section 3, ever since the Weather Bureau provided a seasonal forecast of wind direction at Kitty Hawk for the 1903 flights of the Wright brothers, the aviation sector has been requesting more and more sophisticated, skillful weather information. Now, not only does the industry require basic observations and predictions of wind but also details regarding turbulence, icing potential, convective activity, lightning, microbursts, and more. These requests spurred research into the processes so that forecasting could advance. They also catalyzed improved instrumentation, including weather radars. So not only did meteorology feed the progress of aviation, but the reverse is certainly true as well.
Section 4 demonstrated the criticality of meteorological information for security applications. The advances in that arena in the past 100 years since WWI have been profound. The defense agencies have been patrons of the atmospheric sciences, prompting many innovations that have found a plethora of applications. Such advances were seen in instrumentation, including radar, lidar, and satellite platforms; transport and dispersion sensing and modeling; and probabilistic forecasting, including ensemble applications in NWP. The funds supplied by military agencies have allowed much basic and applied research in the field. Those same agencies are necessarily forward looking, encouraging rapid adoption of best practices and looking toward future applications, including in weather modification and mitigation of the impacts of the changing climate.
These sections show an important component of the history of advances in meteorology. Indeed, it was the interest in weather modification that spurred research in cloud physics and convective processes. This knowledge then fed into progress in weather forecasting, including severe weather, flooding, and aviation hazards such as icing, and eventually into climate models. These needs and advances intersected at points in time, providing the impetus and synergy necessary to make punctuated advances in the state of the science. Thus, the science and applications feed each other and require that experts in different fields of science and engineering work together, advancing the science in ways that basic research cannot accomplish alone.
As these sections have demonstrated, Walter Orr Roberts was correct. Science does serve society, and we will always find scientists and engineers who are passionate about making that connection and solving the technical problems in order to make the world a better place.
Author S. Haupt was supported, in part, by NCAR funds from the National Science Foundation. Author J. Knievel was funded by the U.S. Army Test and Evaluation Command and the Air Force Research Laboratory through Interagency Agreement with the National Science Foundation, which sponsors NCAR. Thanks are given to the following people at NCAR: Michael Flanagan for quickly finding many obscure references, Jeremy Sauer for insights about the future of computing in atmospheric science, and Michael Dixon for discussions about the history of radar. Thanks are also given to Jason Nachamkin and James Doyle at the Naval Research Laboratory and Randal Pauley at the Fleet Numerical Meteorology and Oceanography Center for help on the history of applied meteorology and the U.S. Navy; to Pamela Clark, David Knapp, and Robb Randall at the Army Research Laboratory respectively for supplying documents and ideas on themes for the chapter, for a vision of the future of modeling in the U.S. Army, and for information on applied climatology; to Steven Rugg and Evan Kuchera at the 557th Weather Wing for help with the history of U.S. Air Force Weather; and to Frank Gallagher III at NOAA for thoughts on the future of satellite meteorology. The authors also thank the reviewers—Brant Foote, Andrew Heymsfield, and Scott Sandgathe—who all made constructive suggestions that have helped to improve the manuscript.