Applied meteorology is an important and rapidly growing field. This chapter concludes the three-chapter series of this monograph describing how meteorological information can be used to serve society’s needs while at the same time advancing our understanding of the basics of the science. This chapter continues along the lines of Part II of this series by discussing ways that meteorological and climate information can help to improve the output of the agriculture and food-security sector. It also discusses how agriculture alters climate and the long-term implications of those alterations. Finally, it pulls together several of the applications discussed by treating the food–energy–water nexus. The remaining topics of this chapter are those that are advancing rapidly with more opportunities for observation and needs for prediction. The study of space weather is advancing our understanding of how the barrage of particles from the sun impacts Earth’s atmosphere and the other planetary bodies of the solar system. Our ability to predict wildland fires by coupling atmospheric and fire-behavior models is beginning to impact decision-support systems for firefighters. Last, we examine how artificial intelligence is changing the way we predict, emulate, and optimize meteorological variables and its potential to amplify our capabilities. Many of these advances are directly due to the rapid increase in observational data and computer power. The applications reviewed in this series of chapters are not comprehensive, but they will whet the reader’s appetite for learning more about how meteorology can make a concrete impact on the world’s population by enhancing access to resources, preserving the environment, and feeding back into a better understanding of how the pieces of the environmental system interact.
The ancient Greek philosopher Empedocles conjectured that the world was composed of four primary elements—air, fire, water, and earth. He surmised that these elements were indestructible and unchangeable but could be combined in different proportions to change structure. This pre-Socratic theory, which originated around 460 BC, persisted for over 2000 years. Although we now have a deeper understanding of the nature of the world and cosmos, one can imagine how ancient humankind developed this earth–air–fire–water philosophy based on the observational ability that they had at the time.
This third part of the AMS 100th Anniversary Monograph Series focusing on applied meteorology treats some of the topics that may have led to such a philosophy. Perhaps it became obvious that the fiery sun provided energy for Earth and its atmosphere. The earth produced food via agriculture but depended highly on movements of air to bring weather that could produce rain and provide the basis for transforming seeds and earth to food. Wildland fires could change all of that. Of course, our understanding of these issues has greatly evolved, and this chapter treats how that understanding has progressed over the past 100 years. We now understand that the sun not only provides Earth’s energy but also produces space weather that impacts Earth and its atmosphere.
The rapid increase of available environmental data has enabled rapid advances in our understanding of processes. Similarly, advances in computational power have made possible, and require, new techniques such as artificial-intelligence (AI) methods, higher-resolution computational gridded models, and the coupling of complex processes. Examples include coupling atmosphere and fire processes for wildland fire modeling, atmosphere and ocean processes for hurricane simulation, many other foundational climate processes, and solar and atmospheric processes to predict the impacts of space weather. As humanity strives to manage complex Earth processes, it becomes more important to apply these detailed meteorological modeling capabilities. This coupled modeling approach is essential to providing accurate simulations and forecasts for the applications in this chapter: agriculture, wildland fire modeling, and space weather, as well as for a plethora of other applications.
Section 2 of this chapter is related to agriculture, food security, and how meteorological and hydrological knowledge is used to enhance production in an effort to help feed the world’s population. In turn, as humans change land use for agriculture, the environment is impacted, and we must understand these changes to avoid unintended consequences. This section also continues the theme of Part II of this series (Haupt et al. 2019b), which dealt with topics related to growing populations. Section 2 culminates with a discussion of the food–energy–water nexus and its susceptibility to a changing climate.
Section 3 discusses our current understanding (and the limits of that understanding) of space weather. It suggests that studying the sun’s atmosphere could be accomplished using methods similar to those that have led to a better understanding of Earth’s atmosphere. Continuing on the fiery theme, section 4 deals with wildland fire and how modeling this important and deadly phenomenon can impact how we deal with it. Because the fire itself generates weather, fully coupled models are required to capture this important phenomenon.
Section 5 of this chapter is a bit different in that it discusses the use of AI in the environmental sciences. Although it is less about how applications of meteorology have changed and served an important human topic, it looks forward into how programming machines to think like humans, or even unlike humans, can enhance how we make forecasts, or emulate processes in our models, or optimize some aspect of our models or workflow, or recognize patterns in our world. It also allows us to interact with our burgeoning data in new ways, uncovering new insights through clustering and nonlinear analysis. This section looks to the future, but it also reverts to the past when science relied more on finding patterns in nature. Concluding thoughts and consideration of some prospects for the future appear in section 6.
This chapter is the final one of a three-part series on applied meteorology. In the first chapter, we considered some of the most basic and first-addressed application areas: weather modification, aviation applications, and security applications. Knowledge of meteorology enabled each of these applications, and the study required to advance the applications enriches our understanding of the meteorological processes involved. In the second chapter, we dealt with using meteorology to find solutions to problems generated by a growing population—urbanization, air pollution, energy, and surface transportation. We saw not only that meteorology provides useful information for these applications, but also that each of these issues itself impacts the environment in ways that must be understood and carefully managed. Here in the third part of this series, we continue along the lines of understanding the science behind applied systems in our treatment of space weather; addressing problems such as wildland fire management and agricultural applications; and applying new techniques, such as AI, to those problems. As the last in a series of chapters on applied meteorology, we must acknowledge the lack of completeness. Many additional topics are not covered in this series because of lack of space and time, as well as the fact that some are touched on in other chapters of this monograph. For instance, little attention is paid to hydrological, climatological, or social science applications because they are treated in other chapters of this monograph.
2. Applications in agriculture and food security
Food is a basic human need. To feed increasing populations, global agricultural output has more than tripled in volume in the last 50 years and real prices have fallen (Fuglie and Wang 2012). In the United States, even starting from already high levels of productivity, farm production more than doubled between 1948 and 2011 (Wang and Ball 2014). By 2050, population growth, mainly in the developing world, will necessitate an increase in food production of 59%–98% (Valin et al. 2014). With limited land available for planting more crops, technological advances are necessary to improve practices and efficiencies across the entire food system. Providing meteorological information is critical to continuing to optimize productivity.
Information on constantly changing weather conditions, such as probability of precipitation and temperature, is essential input for models of crop production. For instance, crop models such as the Parallel version of the Decision Support System for Agrotechnology Transfer (pDSSAT; Elliott et al. 2015) use daily weather data (maximum and minimum temperatures, rainfall, solar radiation, winds, and humidity) and farm-management information to examine the status of agricultural systems, provide a framework to monitor crop progress, identify problem areas and opportunities, and contribute to a multifaceted monitoring system that incorporates machine learning, remote sensing, and crop model outputs. Meteorological systems can feed these crop models with weather and climate information, including traditional surface-based observations as well as satellite-based Earth observations. In addition, models of weather and climate can provide useful information for prediction. In turn, the land-use modifications that are part of agriculture can alter the weather and climate, which should be included in our models. These aspects are treated here, as well as the nexus between food, energy, and water.
b. Use of satellite data for agriculture
Remote sensing technologies are poised to play a larger role in food security, through such practices as better crop water monitoring (Bastiaanssen and Steduto 2017). Satellite observation systems have the unique capability to inform critical forecasts and decision-support tools for the agriculture sector. Currently many farmers lack access to timely agricultural forecasts and decision-making tools that could help them make critical choices throughout the growing season, including what to plant, when to plant, and when to irrigate, as well as warning of impending catastrophic weather events and providing yield forecasts to aid in price negotiations with intermediaries.
At the end of 2016, 374 Earth observing satellites were operational (Pixalytics 2017). The main instruments useful for agriculture and food security are classified either as multispectral or microwave; however, planned hyperspectral sensors have the potential to revolutionize remote sensing contributions to food security as many spectral indices focus on narrow bands (e.g., Harris Geospatial Solutions 2017).
An important application of remote sensing data is for monitoring crop conditions, biophysical variables, and crop yield (e.g., Chen et al. 2016; Gitelson 2016; Hatfield et al. 2008). Spectral indices, such as the normalized difference vegetation index (NDVI), are well-known and long-standing successful examples of using remote sensing for crop health identification (e.g., Tucker 1979). The growth in available spectral radiances has greatly increased the suite of potential geophysical inversion capabilities, and numerous crop health conditions can now be monitored. For example, alterations to NDVI and other spectral indices show strong relationships with the fraction of absorbed photosynthetically active radiation (Viña and Gitelson 2005), which is a critical index for inclusion in production efficiency models (Roujean and Breon 1995). Remote sensing techniques are also showing considerable value in identifying crop pests and diseases (e.g., Mahlein 2016), including powdery mildew (Yuan et al. 2016) and white fly (Nigam et al. 2016), among others. Solar-induced fluorescence (SIF; Yang et al. 2015) indicates photosynthetic activity with space-based monitoring capabilities (Guan et al. 2015), and gross primary productivity (GPP; Running et al. 2000) can indicate biomass and carbon allotment for use in crop modeling. Soil moisture products from NASA’s Soil Moisture Active Passive (SMAP) and the European Space Agency’s Soil Moisture and Ocean Salinity (SMOS) instruments could provide much improved global estimates.
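As an illustration of how such an index is computed, NDVI is simply the normalized difference of near-infrared and red reflectances. The following sketch uses illustrative reflectance values, not data from any particular sensor:

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red): a standard spectral index
# (Tucker 1979). Band values below are illustrative reflectances.
def ndvi(nir, red):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Dense, healthy vegetation reflects strongly in the near infrared
# and absorbs red light, so NDVI approaches +1; bare soil or
# stressed crops give lower values.
print(ndvi(0.50, 0.08))  # healthy canopy -> ~0.72
print(ndvi(0.30, 0.20))  # sparse or stressed vegetation -> 0.2
```

Because the index operates elementwise, the same function applies unchanged to whole image arrays of per-pixel reflectances.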
Remote sensing information for agricultural lands enables improvements to model initializations at the beginning of the season and helps to constrain a model’s properties (e.g., biomass, leaf area, soil moisture, and photosynthetic rate) to avoid the effects of drift over the course of a season. Figure 24-1 is an example of full-resolution satellite data (NDVI in the figure), with the option to select other relevant weather data.
c. Modeling for agriculture applications
Coupled atmosphere–hydrosphere–crop models are increasingly being used to support agricultural decisions. The Weather Research and Forecasting (WRF) numerical weather prediction model has been augmented for interaction with crop models. These WRF-Crop modeling capabilities can be integrated with remote sensing data, land-data assimilation systems, and prediction systems to provide short-term and seasonal monitoring and prediction of crop yield, crop-specific water and irrigation demand, soil temperature evolution, and impacts of weather–hydrology–crop interactions on crop growth. Such information can be used as input to various decision-support systems for irrigation management, crop planting dates, and fertilization, which strongly affect farmers’ income and food security.
The High Resolution Land Data Assimilation System (HRLDAS; Chen et al. 2007) performs model-based data assimilation with remote sensing-derived soil moisture and other land surface parameters to generate soil and crop phenology conditions at field scales. The HRLDAS was used to produce real-time soil moisture and temperature in a NASA-funded agricultural pest-management decision-support system (Myers et al. 2008) and the forecast products were accessed by farmers in the central plains and Great Plains.
A wide range of weather and water predictions can be used to drive the HRLDAS at field scales. For each forecast location, the HRLDAS requires inputs of air temperature and moisture, wind speed, pressure, longwave and shortwave radiation, and precipitation, which can come from observations (e.g., radar-based precipitation estimates) and/or models. The HRLDAS merges a data assimilation system and a land surface process model. The underlying land model within HRLDAS is the community Noah-MP land surface model (LSM). It includes multiple options for many key land–atmosphere interaction processes affecting hydrology and vegetation to achieve accurate surface energy and water transfer processes (Niu et al. 2011; Yang et al. 2011). Noah-MP considers surface water infiltration, runoff, and groundwater transfer and storage, and it is able to predict vegetation growth by combining a photosynthesis model and a carbon allocation model that distinguishes between C3 (e.g., soybeans) and C4 (e.g., corn) plants (Niyogi et al. 2009; Collatz et al. 1991). Noah-MP now incorporates crop-growth models (Noah-MP-Crop) to provide crop-species-specific soil and crop yield conditions, and several irrigation modules are under development within the Noah-MP community (Liu et al. 2016).
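The atmospheric forcing fields listed above can be thought of as one record per location and time step. The sketch below gathers them into a simple structure; the field names and units are illustrative only and do not reflect the actual HRLDAS input format:

```python
from dataclasses import dataclass

# The atmospheric forcing fields that the text lists as HRLDAS inputs,
# gathered into one record. Names and units are illustrative, not the
# actual HRLDAS file format.
@dataclass
class SurfaceForcing:
    air_temperature_k: float        # near-surface air temperature
    specific_humidity_kg_kg: float  # near-surface air moisture
    wind_speed_m_s: float
    pressure_pa: float
    shortwave_down_w_m2: float      # incoming solar radiation
    longwave_down_w_m2: float       # incoming longwave radiation
    precipitation_mm_hr: float      # e.g., radar-based estimate

# One hypothetical hourly record for a single grid cell.
record = SurfaceForcing(
    air_temperature_k=298.15,
    specific_humidity_kg_kg=0.012,
    wind_speed_m_s=4.5,
    pressure_pa=97500.0,
    shortwave_down_w_m2=650.0,
    longwave_down_w_m2=380.0,
    precipitation_mm_hr=0.0,
)
print(record.air_temperature_k)  # 298.15
```

In practice such records would be assembled per grid cell from gridded observations or model output before being passed to the land surface model.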
Noah-MP-Crop was evaluated against data obtained from the AmeriFlux sites at Bondville, Illinois, and Mead, Nebraska, as displayed in Fig. 24-2. The Bondville site is a nonirrigated corn/soybean rotation site and the Mead site is an irrigated corn/soybean rotation site. The results indicate that this model was able to reproduce observed surface heat fluxes (Fig. 24-2b) and seasonal evolution of crop phenology (LAI; Fig. 24-2a) and crop yield estimates (Fig. 24-2c). The Noah-MP-Crop model is now being expanded to include dynamic crop root depth and density and irrigation modeling capabilities to enhance the representation of crop–soil moisture interactions.
The HRLDAS with crop modeling capability is also able to integrate high-resolution data, providing more specific information about the crop types and management and their influence on crop growth and surface conditions. The 30-m national CropScape crop-type database has been implemented in HRLDAS. NASA’s SMAP mission provides the surface-layer soil moisture estimates (top 5 cm) at the spatial resolution of 36 km with an unprecedented accuracy of ±0.04 cm3 cm−3 even in areas with relatively high vegetation content (Entekhabi et al. 2010).
Knowledge of both current and future conditions of water and weather at field scales is critical for a wide spectrum of agricultural decision-support systems. Both near-term and seasonal weather prediction are daunting challenges: current weather conditions and forecasts from the NOAA National Weather Service, with spatial resolutions ranging from 3 km for short-term forecasts by the High-Resolution Rapid Refresh (HRRR) to 56 km for seasonal forecasts by the Climate Forecast System (CFS), are too coarse spatially and have significant uncertainties for agricultural management. Nevertheless, the new-generation NOAA National Water Model (operational since August 2016) generates a 1-km forecast of streamflow and soil moisture up to 30 days, a significant step toward providing usable water information for agricultural sectors.
d. Modeling the impact of agriculture in Earth system models
Crop growth modifies the seasonal evolution of land surface characteristics such as albedo and emissivity, available water for evaporation, plant phenology (e.g., vegetation coverage and LAI), and the land–atmosphere exchange of heat, moisture, and greenhouse gases (GHG). These in turn affect surface heat and moisture fluxes, air temperature and humidity, precipitation, soil moisture and runoff, and heatwaves, as evidenced in observations (e.g., Eddy et al. 1975; Barnston and Schickedanz 1984; Changnon 2001; Changnon et al. 2003; Haugland and Crawford 2005; Mahmood et al. 2008; DeAngelis et al. 2010; Alter et al. 2015a, 2018; Chen et al. 2018). Therefore, it is imperative to represent agriculture in Earth system models (ESMs). A recent review of the community efforts in developing agriculture modeling frameworks was provided by McDermid et al. (2017).
The essential function of agriculture modeling in ESMs is to provide time–space variations of characteristics associated with crop growth and management that affect energy, water, and GHG fluxes within the atmosphere, biosphere, hydrosphere, and ecosphere to represent biogeophysical and biogeochemical interactions between land-use changes and climate systems. One approach to modeling agriculture in ESMs, mainly for the sake of simplifications and computational efficiency, is to prescribe the agriculture-induced changes in land surface characteristics such as albedo, soil resistance, LAI, vegetation cover, rooting depth, and soil moisture storage (e.g., Cook et al. 2009; Georgescu et al. 2011; Davin et al. 2014).
Recent efforts by, for example, Lokupitiya et al. (2009), Levis et al. (2012), and Liu et al. (2016) have focused on representing dynamic crop growth and the companion biogeophysical and biogeochemical processes in ESMs. They often involve coupling the Ball–Berry-type photosynthesis model (Ball et al. 1987; Collatz et al. 1991) and soil hydrology models with specific crop-growth models (corn, wheat, rice, etc.) and developing the crop-specific parameters required by these crop-growth models. The evaluation of such coupled soil–crop–climate models is often realized with data collected at field scales or from the Agricultural Model Intercomparison and Improvement Project (AgMIP; Rosenzweig et al. 2014). However, applying those models to capture both large-scale agriculture patterns and regional differences in agricultural management methods such as crop rotation, irrigation, conservation tillage, and fertilization remains a daunting challenge in today’s ESMs.
Nevertheless, agriculture management models including irrigation and human water management with varying degrees of complexities are being incorporated in ESMs (e.g., Leng et al. 2013; Drewniak et al. 2013; Leng et al. 2017) to enhance interactions among various Earth system modeling components (e.g., groundwater storage). Although those new agriculture modeling capabilities, as integrated modeling tools for investigating relevant science and sustainability issues, help advance the understanding of the nexus among food, energy, and water systems, the development of crop and agriculture management models in ESMs is still in its infancy. For instance, substantial uncertainties exist in the modeled temperature effects of irrigation on regional climate in ESMs (Kueppers et al. 2007; Sacks et al. 2009). Future priorities should focus on representing complex interactions between agricultural management and water-system components at various spatial and temporal scales.
e. Impact of irrigation on climate
Many areas of the world have seen a recent increase in agricultural intensity. Agricultural land cover accounts for roughly 40% of the global land surface (Ramankutty et al. 2008), and irrigation accounts for 70% of human consumptive uses of the world’s freshwater resources (Boucher et al. 2004; Velpuri et al. 2009). It accounts for approximately 60% of consumptive use of freshwater in the United States (Minchenkov 2009; Braneon 2014). Alter et al. (2018) recently showed that intensive agriculture over the latter part of the twentieth century was associated with significant increases in corn and soybean production in the Midwestern United States. At the same time, summers had more rainfall and cooler conditions, suggesting a relationship between agricultural practices and regional climate. Applied climatology and agriculture have been connected for many decades. A range of agricultural practices such as farming, irrigation, livestock production, and land cover change impact hydroclimatic processes and biogeochemical cycles (Shepherd and Knox 2016).
In the past few decades, irrigated agriculture has increased relative to rain-fed agriculture (Fig. 24-3). The average value of production for irrigated farmland is estimated to be more than 3 times that for dryland (rain-fed) farmland (Schaible and Aillery 2012), which is one reason for the upward trend in irrigation. This form of land cover change has the ability to modify regional climate, with recent studies suggesting that forcing from irrigation is a stronger climate change forcing than greenhouse gases for some regions (Alter et al. 2018).
The increased amount of water available at the surface via irrigation has the ability to modify the surface energy budget (Harding and Snyder 2012). This modification is primarily due to partitioning the incoming solar radiation toward latent heating at the expense of sensible heating. For a 12-yr average, irrigation decreases summer surface air temperature by less than 1°C and increases surface humidity by 0.52 g kg−1 (Fig. 24-4; Chen et al. 2018), but the irrigation cooling effect is more pronounced and longer lasting for maize than for soybean. These differing temperature effects of irrigation are associated with a significant reduction in the surface sensible heat flux for maize, although the effect over soybean is negligible (Fig. 24-4). Both maize and soybean have increased latent heat fluxes after irrigation events.
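This repartitioning of available energy is often summarized by the Bowen ratio, B = H/LE (sensible over latent heat flux). The sketch below uses hypothetical flux values for illustration; they are not taken from Chen et al. (2018):

```python
# Bowen ratio B = H / LE: the ratio of sensible to latent heat flux.
# Irrigation shifts available energy toward latent heating, lowering B.
def bowen_ratio(sensible_w_m2, latent_w_m2):
    return sensible_w_m2 / latent_w_m2

# Hypothetical midday summer fluxes over a maize field (W m-2), with
# the available energy H + LE held fixed at 500 W m-2 in both cases.
dryland = bowen_ratio(sensible_w_m2=250.0, latent_w_m2=250.0)
irrigated = bowen_ratio(sensible_w_m2=150.0, latent_w_m2=350.0)

print(f"dryland B = {dryland:.2f}")    # 1.00
print(f"irrigated B = {irrigated:.2f}")  # 0.43
```

With total available energy unchanged, the lower Bowen ratio after irrigation corresponds directly to the surface cooling and moistening described above.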
As a first response to the decreased sensible heating, surface temperatures are modified. Changes to temperatures in irrigated regions include decreased maximum temperatures, increased minimum temperatures, and increased dewpoint temperatures due to the increased low-level moisture (Geerts 2002; Adegoke et al. 2003; Boucher et al. 2004; Kueppers et al. 2007; Lobell and Bonfils 2008; Cook et al. 2015). Recent literature suggests that irrigation has influenced temperature extremes and altered precipitation in irrigated areas. The literature also suggests that precipitation is often enhanced downwind of irrigated regions (Barnston and Schickedanz 1984; DeAngelis et al. 2010; Sen Roy et al. 2011; Harding and Snyder 2012; Alter et al. 2015b; Pei et al. 2016; Williams 2016). It is postulated that irrigation enhances precipitation downwind because of increased advection of evapotranspiration and changes in convective available potential energy (CAPE) (DeAngelis et al. 2010). The current literature is conclusive that irrigation significantly modifies the land surface and affects surface energy budgets, the water cycle, and climate (Cook et al. 2015). Given these findings, future work needs to focus on accurately representing the impacts of irrigation in next-generation climate models for historical and future attribution studies (Alter et al. 2018).
f. Food–energy–water nexus
Beyond irrigation practices, there is an increased focus on agricultural activities related to the food–energy–water (FEW) nexus. A World Economic Forum report on global risk clearly articulates the complex and interdependent relationships among food supply, water availability, and energy production (World Economic Forum 2011). The same report projects significant increases in food demand (50%), water demand (30%), and energy demand (40%) by 2030. Shepherd et al. (2016) argue that much of the demand is driven by population changes and urbanization, which suggests that FEW interactions, agriculture, and urbanization will challenge scholars for years to come.
The FEW nexus is a conceptualization of the many ways in which these sectors are interconnected. Inputs of both energy and water are used in the production, processing, distribution, and consumption of food. In addition, energy production depends on the availability of water, while the provision and use of water require energy. These networks of interdependencies and feedbacks can be quite complicated. They also differ regionally as a function of differences in climate, economic activity, population, and land use. Weather and climate variability affect many of the activities along the suite of interconnected value chains that comprise the FEW nexus. Better anticipation of those impacts is likely to facilitate more efficient coordination of activities while enhancing the profitability of enterprises within these sectors.
An understanding of the nature of the FEW nexus, its regional heterogeneity, and its ongoing evolution in response to changing technologies, markets, and policies will be needed to best meet the applied meteorology needs of these interconnected sectors. The world’s energy sector is entering a period of rapid transformation, especially in the structure of the electric power industry, in which distributed generation by wind, solar, and other renewables will account for a growing share of total electricity output [DOE 2015; IEA 2014; also see Part II of this applied meteorology series within the monograph (Haupt et al. 2019b)].
In addition to a rapidly evolving electric power sector, changes also are ongoing in other components of the nation’s food, energy, and water systems. Globalization is playing an increasing role in food markets (Brown et al. 2017); although requiring additional energy costs for transportation, it helps to alleviate the impacts of droughts and floods on food security. At the same time, globalization increases the vulnerability of small-scale farmers to competition from distant producers and exposes poor consumers to food-price volatility unrelated to local conditions. Other changes will be driven by the fact that global climate change is already underway and is projected to have significant impacts on agricultural systems and water resources over the foreseeable future (IPCC 2014).
Most frameworks focused on FEW have ignored hydroclimate implications and interactions in favor of land use, greenhouse gas emissions, resource management, and other factors (Villamayor-Tomas et al. 2015), even though hydroclimatic factors are implicitly integral to each node of the FEW nexus. Organizations like the World Bank maintain databases of key country-level indicators related to climate, energy, and agriculture. These databases often include metrics such as carbon dioxide emissions, cereal yield, and the percentage of the urban population with access to an improved water source. Shepherd et al. (2016), for example, have explored various precipitation-per-person metrics (Fig. 24-5). Specific objectives of developing such metrics are to relate agricultural areas under cultivation per capita to water availability, nourishment needs, and energy constraints, and to assess agriculture-system vulnerability to hydroclimatic variability and extremes.
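One simple form that a precipitation-per-person metric can take is the annual precipitation volume over a region divided by its population. The sketch below is illustrative only; it is not the specific definition used by Shepherd et al. (2016), and all input values are hypothetical:

```python
# A simple precipitation-per-person metric: annual precipitation
# volume over a region divided by the regional population.
# All numbers below are hypothetical.
def precip_per_person_m3(annual_precip_mm, area_km2, population):
    # 1 mm of precipitation over 1 km^2 is 1000 m^3 of water.
    volume_m3 = annual_precip_mm * area_km2 * 1000.0
    return volume_m3 / population

# A hypothetical semiarid region: 400 mm yr-1 falling over
# 50,000 km^2, shared by 2 million people.
print(precip_per_person_m3(400.0, 50_000.0, 2_000_000))  # 10000.0 m^3 per person
```

Comparing such a number across regions, or against per capita water requirements for crops and energy production, is one way to expose hydroclimatic vulnerability within the FEW nexus.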
Regional differences in weather and climate vulnerabilities are especially striking when considering the food sector’s connections to energy and water. Irrigation plays a significant role in supporting agricultural output in the arid and semiarid western region of the United States, where it dwarfs all other water uses. Irrigation also is important in other major centers of crop production, especially in parts of the southeastern and south-central states of the United States, where supplemental irrigation allows more reliable and profitable operations than would be possible with reliance on rainfall alone (Fig. 24-6).
In California, which leads the United States in terms of the value of agricultural output and in irrigation water use, the USGS reports that irrigators withdrew 25.8 million acre-feet (1 acre-foot = 1233.48 m3) of water in 2010 (prior to several recent years of extreme drought conditions), while all of public supply and self-supplied industrial withdrawals amounted to approximately 7.5 million acre-feet. The irrigation share in total water withdrawals is higher in several other western states: 89% in Colorado, 81% in Idaho, and 94% in Montana (Maupin et al. 2014). The region’s heavy reliance on mountain snowpacks to regulate seasonal water availability creates vulnerabilities to drought periods as well as to climate change. As conditions warm, earlier runoff and related reductions in late summer streamflows are likely to be especially disruptive for irrigated agriculture in those states.
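For readers more accustomed to SI units, the withdrawal figures above convert directly using the factor given in the text (1 acre-foot = 1233.48 m³):

```python
# Unit conversion used in the text: 1 acre-foot = 1233.48 m^3.
ACRE_FOOT_M3 = 1233.48

def acre_feet_to_km3(acre_feet):
    return acre_feet * ACRE_FOOT_M3 / 1e9  # m^3 -> km^3

# California withdrawals in 2010 (Maupin et al. 2014), from the text:
irrigation = acre_feet_to_km3(25.8e6)            # 25.8 million acre-feet
public_and_industrial = acre_feet_to_km3(7.5e6)  # ~7.5 million acre-feet

print(f"irrigation: {irrigation:.1f} km^3")                      # 31.8 km^3
print(f"public + industrial: {public_and_industrial:.1f} km^3")  # 9.3 km^3
```

The conversion makes the scale disparity explicit: irrigation withdrawals exceeded combined public-supply and self-supplied industrial withdrawals by more than a factor of 3.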
With regard to other aspects of the FEW nexus in the United States, there are additional striking differences between the western and eastern portions of the country in the ways that water is used by electric power producers and in the overall sectoral composition of water use (Fig. 24-7). In contrast to the eastern states where once-through cooling for thermoelectric power plants dominates water diversions, water scarcity has forced western electric utilities to adopt technologies that do not require large volumes of water such as cooling ponds, recirculating systems, and even dry cooling towers (Averyt et al. 2011; Cooley et al. 2011; Fisher and Ackerman 2011; Kenney and Wilkinson 2011). In addition, much of the West’s hydropower generation occurs at run-of-the-river facilities, or at dams with limited storage capacity, and thus is also sensitive to droughts and future changes in seasonal flow patterns (Gleick 2015).
Climate impacts on the electric energy sector occur on the supply side, for example through the effects of warmer water on thermoelectric plant cooling and changes in hydropower, wind, and solar production. On the demand side, impacts may include increased energy use for water provision and treatment. As a case study, in 2014, drought conditions led California’s irrigators who were facing reduced surface water supplies to increase their groundwater pumping by approximately 5.1 million acre-feet, incurring an additional $454 million in energy costs for pumping (Howitt et al. 2014). That surge in energy use by the agricultural sector came at a time of reduced in-state hydropower generation and a consequent $1.4 billion increase in ratepayer electrical costs over the course of three consecutive dry years (Gleick 2015). That drought experience demonstrates that the coupling between the cost of electricity and overall electricity use is complicated and revolves around different ways in which electricity is used for water supply. Despite the increased groundwater pumping and probable increased demand for air conditioning during that record-hot summer, statewide retail sales of electricity in 2014 were slightly lower than during the previous year (EIA 2015). Possible explanations for the drop include reduced pumping for long-distance conveyance of water from the Sacramento–San Joaquin Delta to the southern part of the state as a consequence of drought-related environmental restrictions on that pumping. In addition, conservation incentives and increased generation by distributed solar systems may have played a role.
The long-term consequences of California’s recurring severe droughts will include greater pumping lifts as increased reliance on groundwater sources contributes to declining aquifer levels. The substitution of groundwater for unavailable surface water supplies has done much to avert economic hardship and long-term damage to orchards and other perennial crops, but until passage of the state’s 2014 Sustainable Groundwater Management Act (AB 1739, SB 1168, and SB 1319), this activity was uncoordinated and largely unconstrained.
As climate change progresses, a new generation of applications is emerging that moves beyond physical connections between agriculture and climate. Chen et al. (2016) developed an empirical framework for estimating agricultural yields based on weather. Burke and Emerick (2016) investigated adaptation practices in U.S. agriculture with the goal of understanding future risks to outcomes. Altieri and Nichols (2017) explored how traditional agroecological strategies (biodiversification, soil management, and water harvesting) might be used in the management and design of agroecosystems, with the goal of improving both resilience to risk and productivity. Such approaches epitomize how applied climatology is evolving to address twenty-first-century challenges.
Given the complicated nature of the interlinkages among the FEW sectors, and their sensitivity to climate variability, it is important to develop a clear understanding of the nature and dynamics of the FEW nexus. Multidisciplinary and multistakeholder collaboration will be needed to foster that understanding.
3. Applications in space weather
We live in the atmosphere of our star, the sun. “Space weather” is the term used to describe the relentless barrage of particles, originating in the steady evolution and catastrophic breakdown of magnetic structures on the sun, that bathes Earth and the other planetary bodies of the solar system. In the increasingly technological society in which we live, the impacts of the sun are being felt more and more by members of the public—even if the vast majority do not know it!
Space weather has a range of impacts on our atmosphere that manifest themselves across the scale from raw natural beauty (through aurorae; e.g., Chapman 1957) to the destruction of critical public infrastructure (e.g., Boteler 2001). The day-to-day tick tick tick of the sun on our atmosphere costs the U.S. government and private sectors upward of $10 billion per year (National Research Council 2009), and it is one of the only “natural disasters” that the reinsurance industry will not cover (e.g., Schrijver et al. 2014). While our planet’s magnetic field is critical as a shield protecting us from the majority of solar variability, the sun’s magnetic field is the critical driver of the sun–Earth system, and its characterization, monitoring, and modeling pose the most significant challenge to progress.
Early investigations of solar magnetism and extreme flavors of solar activity relied heavily on correlated impacts on our atmosphere (e.g., Birkeland 1914). Indeed, many investigations into what would eventually be dubbed space weather were rooted in the practical aspects of military need during World War II. Both the Axis and Allied powers deployed observational techniques that were very advanced at the time to provide forewarning of ionospheric distortions that would significantly impact battlefield tactics through local and global radio communications (see, e.g., Hufbauer 1991; de Jager 2002): empirical connections of the sun and Earth were the norm. In those days the primary means of identifying solar “storms” was the detection of events on the sun’s east limb using a device called a coronagraph, invented by French astrophysicist Bernard Lyot (Lyot 1939) to create artificial total eclipses by blocking the light from the disk of the sun. A coronagraph reveals the sun’s corona—a cloud of gas surrounding the sun that is one million times fainter than the sun’s disk—and chromospheric protuberances called “prominences.”
Following World War II, our knowledge of the sun–Earth system advanced with the dawn of the rocket, space, and satellite age, much as terrestrial meteorology did. Milestones included the V2 rocket-borne spectroscopic measurement of the sun’s corona and its subsequent identification as being consistent with the presence of a million-kelvin cloud of highly charged particles (Grotrian 1939; Edlen 1945), the prediction of the “solar wind” (Parker 1958), and its eventual detection by the Soviet Luna 1 spacecraft and subsequent Mariner mission measurements (Neugebauer and Snyder 1962). The observational environment outside of the turbulence and (photon) absorption of our atmosphere provided by the Orbiting Solar Observatory (OSO) fleet and then Skylab identified a new relevant feature in the space weather lexicon: the “coronal mass ejection” (CME; e.g., Hansen et al. 1971; Tousey and Koomen 1972).
We know now that CMEs are very often intimately related to flares and prominence eruptions. They flow into a solar system whose plasma flows are dictated by the sun’s magnetic field, the solar wind structure, and the energization of the corona. Characterizing and predicting that relentlessly evolving environment is the essence of space weather forecasting (SWx).
The challenges of contemporary SWx can be considered to be of two flavors:
Once an eruptive event has occurred (noting that it takes 8 min, because of the 93 million mi (~150 million km) of light travel time, for the changes from the event to be seen at Earth), we are in a race against time to estimate the path of the disturbance through the solar system: determining whether the disturbance will intersect the orbit of Earth; estimating the arrival time at Earth; estimating the magnitude of the interplanetary shock (CMEs can travel faster than the background medium); and estimating the magnetic polarization of the disturbance, since an antiparallel magnetic field in the disturbance will couple directly into Earth’s protective magnetosphere. This sounds a lot like hurricane forecasting, except with a couple of critical differences: we really do not know much about the mechanisms driving and populating the solar wind (the background state through which the disturbance travels), and we have no observational baseline from which to estimate the disturbance polarization, other than a couple of sentinel spacecraft a few tens of minutes upstream of Earth on the sun–Earth line at the Lagrange “L1” point of gravitational balance between the sun and Earth. Since numerical models form the primary forecasting tool, there is wide acknowledgment of fundamental limitations in predictive skill on an event-by-event basis. This “after the horse has bolted” approach is the current paradigm of SWx.
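The race against time can be made concrete with a naive kinematic estimate. The Python sketch below assumes a constant CME speed over the sun–Earth distance; real forecasting must also handle acceleration or drag against the ambient solar wind, which this toy calculation ignores.

```python
AU_KM = 1.496e8          # mean sun-Earth distance, km
LIGHT_SPEED_KM_S = 3.0e5  # speed of light, km s^-1

def transit_time_hours(cme_speed_km_s):
    """Naive constant-speed estimate of CME sun-to-Earth travel time."""
    return AU_KM / cme_speed_km_s / 3600.0

# A fast 1500 km/s CME arrives in roughly a day; a slow 400 km/s one
# takes over four days -- while light from the eruption arrives in ~8 min.
fast = transit_time_hours(1500.0)              # ~27.7 h
slow = transit_time_hours(400.0)               # ~104 h
light_minutes = AU_KM / LIGHT_SPEED_KM_S / 60  # ~8.3 min
```

The spread between the light-travel warning (minutes) and the plasma arrival (one to four days) is precisely the forecast window the text describes.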
The alternate, predictive, approach to SWx doesn’t really exist! To the vast majority of the SWx community, in addition to the broader solar research community, solar flares and CMEs are as “intrinsically unpredictable” as earthquakes. This paradigm is neither acceptable nor true, as we’ll discuss below. The future of human exploration of the solar system and the protection of critical infrastructure in space and in the troposphere requires the development of considerable predictive skill in SWx, both for solar events and terrestrial impacts. (However, such development should not be an “unfunded mandate.”)
As mentioned earlier, significant predictive skill for tropospheric weather was accelerated by the dawn of the satellite age through our ability to study the entire atmosphere from the vantage point of low Earth orbit (e.g., Wexler 1962; Lorenz 1973). The identification and characterization of global-scale drivers of local-scale weather, and developing predictability for the former, led to more success in forecasting the latter. SWx research is at the same stage as terrestrial meteorology was at the dawn of the space age (70 years ago) because the SWx enterprise is limited by a single “local time” perspective. Our observational baseline is focused only on the sun–Earth line, and our knowledge of the global solar atmosphere, from where the bulk of our issues stem, is, to be frank, naïve.
Solar magnetism is the root cause of space weather. In fact, solar magnetism drives the bulk of our star’s variability across scales and so characterizing that evolving magnetism on time scales from seconds to millennia is sometimes cast in the similar “weather” and “climate” paradigms as our investigations of Earth’s atmosphere. The vast scale of the sun and the massive sun–Earth distance make the SWx problem, or those relating to the root of the space weather problem at the sun, a profound remote sensing challenge—a challenge in which we capture photons and particles 93 million miles away to infer the physics of the fundamental processes that propelled them to us (e.g., Schwenn 2006; Schrijver et al. 2015).
Of most critical importance to the SWx enterprise is the characterization of the sun’s magnetism throughout the solar atmosphere (e.g., del Toro Iniesta and Ruiz Cobo 2016). By exploiting quantum mechanical effects and measuring polarized radiation we can get a bearing on the sun’s vector magnetic field as it becomes visible after building up in the sun’s opaque interior, via a poorly understood process called the “solar dynamo” (e.g., Charbonneau 2010; Hathaway 2015).
Solar magnetism displays a host of variational time scales, of which the enigmatic 11-yr sunspot cycle is most prominent. Sunspots are a manifestation of intense magnetic field concentrations and host flares, CMEs, and the most dynamic of prominences—in other words, the majority of the most dangerous space weather events. The other, more stealthy and mysterious constituent of the space weather zoo is also rooted in varying magnetism: the coronal hole. Coronal holes were discovered once systematic, or synoptic, coronal observations of the solar disk became possible from orbit (Krieger et al. 1971), where they appeared, literally, as dark “holes” in the bright corona. It was subsequently discovered that coronal holes are the outward extensions of spatially extended regions of unipolar magnetic field (Timothy et al. 1975) and the source of the “fast solar wind” (Krieger et al. 1973).
The solar wind has two primary states, “slow” (200–500 km s−1) and “fast” (>500 km s−1). The former is really a continuum of slow states, where differences in slow wind parcels are most easily quantified through differences of plasma composition in the parcels (e.g., Hundhausen 1970) that result from the different magnetically confined regions of the sun’s corona from which that plasma originates (e.g., Harvey and Sheeley 1979). Slow wind can arise from quiescent and active regions on the sun. The physical origins of the slow solar wind, its gradual acceleration, and its compositional contrast pose challenges to our community. The simpler state, in principle, is the fast wind, coming from the relatively simple coronal hole environment, but its rapid acceleration and starkly different compositional signature similarly pose physical challenges. A simple delineation between slow and fast wind, beyond their measured velocities, is that the latter is “cooler,” with a compositional signature consistent with a plasma of <1 MK, while the former has a range of consistent root plasma temperatures that can greatly exceed 1 MK (e.g., Zurbuchen et al. 2002). These two states vary and mix in the three-dimensional magnetic system that is the heliosphere, on pathways that are themselves set by the magnetic field configurations at the center of the system.
Establishing the “solar wind roadmap,” the state of the background plasma environment into which a flare, CME, or prominence is launched poses as much of a challenge to our community as the disturbances themselves. In a sense though, it is more critical, because any scientist knows about the impact of poor initial conditions on a mathematical or numerical problem. Can you imagine the chances of successfully forecasting the characteristics of a hurricane when you have no more than 50% accuracy on any of the background environmental variables? That wouldn’t be acceptable, would it? There are many “decision points” in the contemporary SWx challenge. Operational practice leans heavily on past experience that results from the analysis of high-heritage observational tools.
We must rise to these challenges! The SWx capability required to protect future human explorers in the solar system, in addition to critical ground- and space-based infrastructure, is being conceived through the recent National Space Weather Strategy (https://www.sworm.gov/). This strategy is devised to reduce and/or eliminate the shortcomings of the physical challenges and forecasting decision points. Observational tools to reduce risk with regard to the background solar wind, CME directionality, CME and prominence magnetic polarization, and so on are all critically wedded to information technology, data assimilation, and the array of numerical modeling techniques that have been extensively developed over past decades in the solar–terrestrial physics community. A truly critical need for future space weather understanding (and increased forecast skill) is the full characterization of the sun’s global magnetic field distribution—we must begin to study the sun’s atmosphere as a weather system, exploiting the observational tools and methods developed by the meteorological community. Early investigations of (truly) global solar phenomena point to strong analogs between our atmosphere and the sun’s (McIntosh et al. 2017) and offer insight into the gross predictability of solar activity that derives from persistent longitudinal patterns in solar magnetism.
On the terrestrial side of SWx, observational platforms are being deployed to explore the magnetosphere, radiation belts, and now the ionosphere with the Global-Scale Observations of the Limb and Disk (GOLD; Eastes et al. 2017) and Ionospheric Connection Explorer (ICON; Immel et al. 2018) missions. Those missions, their data, and the numerical models derived from them are going to provide critical insight into the “top-down” (from the sun) and “bottom-up” (from the troposphere) impacts on the ionospheric interface between magnetically and thermodynamically controlled environments. As is often the case, some of the most interesting physical phenomena and challenging measurements to characterize occur at boundaries of physical domains. Conquering the physics of the ionosphere will be necessary to improve forecast skill of that region beyond a few hours. Application of high-skill, long-duration ionospheric forecasts has a reach beyond the academic environment, into the commercial and military sectors, where warfighters critically depend on their field communication devices.
The need for high-skill and accurate space weather forecasts of the coupled sun–Earth system will not diminish. Our societal dependence on technology continues to increase; it will drive a need to understand our star and its persistent connection to our planet like never before.
4. Applications in wildland fire management
Wildland fires are a component of the natural environment that is essential to maintaining healthy ecosystems, but they are also often destructive, affecting natural resources, threatening human life and property, reducing air quality, leading to soil erosion and flooding, and potentially affecting weather and climate. The earliest evidence of wildland fire, based on plant fossils preserved as charcoal, can be dated to the Silurian period more than 400 million years ago. Throughout geological history, wildfire frequency and intensity were related to the level of oxygen in the atmosphere (Watson et al. 1978) and the availability of fuel sources. Applications of meteorology can be traced back to the beginnings of organized wildland fire management. In the United States the event that is often considered a turning point in wildland fire management is the Great Fire of 1910, also called the “Big Blowup,” which burned across the western states in the summer of 1910. The burn area spread over parts of three states (Washington, Idaho, and Montana), covering 12 100 km2, similar in size to the state of Connecticut. The passage of a cold front on 20 August with hurricane-strength winds resulted in a large number of smaller wildland fires aggregating into two large ones. Over two days in August 1910 the firestorm killed 87 people, a large number of whom were firefighters. The devastating effect of this and other wildland fires resulted in the U.S. Forest Service policy of suppressing all wildland fires (Pyne 1982). The Forest Service officially abandoned this policy in 1978. Better wildland fire suppression, and later fire management, required a better understanding of wildland fire behavior and, consequently, an improved understanding of the interactions and feedbacks between wildland fire and weather and climate.
Advances in climatology, meteorology, and weather forecasting over the last 100 years have resulted in corresponding advancements in the prediction of wildfire likelihood and spread. Today, development of coupled wildland fire and atmospheric environment models enables predicting extreme fire behavior resulting in rapid rates of spread. Accurate predictions could provide essential information for effective wildland fire management.
Fires in general, including wildland fires, require three components: a heat source, fuel, and oxygen. These three essential components are all intrinsically connected to environmental conditions, and therefore, to the atmosphere. In wildland fires the heat source required to ignite fuels is often provided by lightning. Regional climate conditions, in addition to terrain and soil type, determine which fuels are dominant in a specific area. Weather and climate affect fuel moisture content. Finally, burning, or combustion, is an exothermic chemical reaction between a fuel and an oxidant. In wildland fires the oxidant is atmospheric oxygen. While oxygen is the second largest constituent of Earth’s atmosphere, accounting for almost 21% of its volume, atmospheric circulations, through turbulent mixing, are essential for providing a continuous supply of oxygen for combustion processes in wildland fires. Atmospheric conditions including winds, relative humidity, precipitation, cloud cover, solar irradiance, and so on affect the spread of wildland fires. In turn, wildland fires directly affect atmospheric conditions through modification of the surface sensible and latent heat flux, moisture flux, aerosol loading, and indirectly through modification of convective updrafts, resulting in modified wind patterns and smoke dispersion that modifies radiative transfer. Under favorable conditions, large wildland fires can result in formation of pyrocumulus clouds. Depending on the available moisture, pyrocumulus clouds can evolve into thunderclouds, or pyrocumulonimbus, which can produce rain, lightning, and potentially strong downdrafts, sometimes called “collapsing columns,” all of which can affect wildland fire evolution. Wildland fires and atmospheric conditions therefore form a complex coupled nonlinear dynamical system with feedbacks that control wildland fire spread.
The importance of weather phenomena for wildland fire spread, as well as the effect of large wildland fires on local weather, was observed and documented well before it was possible to measure these effects and carry out detailed quantitative analyses. Some of the first descriptions, published in the United States, of the effect of wildland fires on weather phenomena coincide with the year when the American Meteorological Society was formed, 1919. These studies focused on observations of convective clouds as a consequence of large wildland fires in California (Carpenter 1919) and Hawaii (Reichelt 1919), as well as a number of rain events from cumulus clouds over wildland fires reported during the mid-1800s (Espy 1919).
Although today the majority of wildland fires may be ignited by humans, a significant number of wildland fires are still ignited by lightning. In the western United States, in particular, a large number of wildland fires are ignited by dry lightning (Abatzoglou et al. 2016; also cf. EcoWest 2013). Wildland fires ignited by dry lightning in remote, not easily accessible, and sparsely populated areas often result in the largest burned areas. Dry lightning is cloud-to-ground lightning that is not accompanied by rainfall. The likelihood of dry lightning occurrence depends on the stability aloft and the lower-level atmospheric moisture content (Rorig and Ferguson 1999). Spatial representation of lightning likelihood after the passage of a storm can be an important aid for wildland fire managers. Wildland fire ignition potential by lightning depends on environmental conditions, including live and dead fuel moisture content, and weather conditions (i.e., wind speed, temperature, and humidity). While dry lightning often ignites wildland fires, recent studies indicate that climate conditions are the dominant controlling factor of variability in the burned area throughout the western United States. Forecasting lightning and lightning ignition potential represents one of the greatest challenges for modeling and managing wildland fires because of the inherent spatial and temporal stochasticity and other associated uncertainties.
The complexity of the coupled wildland fire–atmosphere system represents a significant challenge to the development of effective applications for wildland fire management. An effective decision-support system combines observations and observation-derived data products with predictive models. A decision-support system for wildland fire management necessarily integrates a wide range of disparate data sources including data about lightning strikes, fuel types, and fuel moisture content, as well as climate and weather conditions (e.g., Wildland Fire Decision Support System 2018; Calkin et al. 2011; Wildland Fire Assessment System 2018; Jolly and Freeborn 2017).
The U.S. Forest Service’s Wildland Fire Assessment System (WFAS; WFAS 2018; https://www.wfas.net) provides a range of information related to fire potential and danger, including Fire Danger Rating, Haines index (Haines 1988), and dry lightning maps. Fire Danger Rating is based on preceding weather conditions, fuel type, and fuel moisture content for dead and live fuels. Using data from the National Digital Forecast Database, WFAS produces fire danger forecasts. Fuel moisture content for both dead and live fuels depends on weather and climate conditions.
The Haines index characterizes lower-atmosphere stability and dryness specifically for fire weather in order to quantify the likelihood of wildfire growth. The Haines index is computed using morning atmospheric soundings provided by the Universal Rawinsonde Observation Program (RAOB; http://www.raob.com/features.php). Dry lightning maps are produced by combining daily estimates of rainfall produced by the National Weather Service Advanced Hydrologic Prediction Service (AHPS) with lightning density grids derived from daily cloud-to-ground lightning strike data (Cummins et al. 1998). Dry lightning occurrence is mapped using a lightning fuel-type grid derived from maps of land cover type (Schmidt et al. 2002). Last, the potential for lightning ignition is calculated by combining lightning strike data with a lightning efficiency map. Lightning efficiency depends on the ratio of positive to negative discharges, since positive discharges result in a higher likelihood of ignition.
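The Haines index itself is a simple sum of two categorized terms: a stability term and a moisture term, each scored from 1 to 3. The Python sketch below implements the mid-elevation variant; the thresholds are the commonly tabulated ones and should be checked against Haines (1988) before any operational use.

```python
def haines_index_mid(t850_c, t700_c, td850_c):
    """Mid-elevation Haines index (2 = low growth potential, 6 = high).

    Stability term: 850-700 hPa temperature difference (deg C).
    Moisture term: 850 hPa dewpoint depression (deg C).
    Thresholds follow commonly tabulated values for the mid-level variant.
    """
    stability = t850_c - t700_c
    a = 1 if stability < 6 else (2 if stability <= 10 else 3)
    depression = t850_c - td850_c
    b = 1 if depression < 6 else (2 if depression <= 12 else 3)
    return a + b

# A warm, dry, unstable sounding scores the maximum:
high_risk = haines_index_mid(t850_c=24.0, t700_c=10.0, td850_c=2.0)  # -> 6
# A cool, moist, stable sounding scores the minimum:
low_risk = haines_index_mid(t850_c=10.0, t700_c=8.0, td850_c=7.0)    # -> 2
```

The categorization into coarse classes, rather than a continuous value, reflects the index's intent as a quick sounding-based screening tool rather than a physical model.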
Moisture content in wildland fuels represents an important parameter controlling the ignition and spread of wildland fires. Accurate estimates of dead and live fuel moisture content are therefore essential for accurate assessment of wildland fire risk and spread. Dead fuels are classified by their time lag, which depends on the diameter of the fuel. The time lag approximates the time it takes for the fuel to progress 1 − 1/e (roughly two-thirds) of the way to equilibrium with its environment. Dead fuels are classified as 1-, 10-, 100-, and 1000-h fuels. Dead fuel moisture depends only on environmental conditions. In addition to direct estimates of fuel moisture content, the growing season index (GSI), the normalized difference vegetation index (NDVI), the Keetch–Byram index (Keetch and Byram 1968), and the Palmer index (Palmer 1965) can be used to assess the state of wildland fuels. GSI is used to quantify physiological limits to photosynthesis in live fuels. GSI depends on minimum temperature, vapor pressure deficit, and the duration of daylight. Biochemical processes in plants are sensitive to low temperatures; in particular, water uptake by roots is affected by soil temperatures. The vapor pressure deficit of the atmosphere is used as a proxy for the soil water balance, which is difficult to measure. Daylight duration corresponds to the period when plant photosynthesis takes place and is related to the seasonal cycle. NDVI derived from Advanced Very High Resolution Radiometer satellite data is used to determine vegetation greenness. The Keetch–Byram index is a drought index used to assess fire potential. The Palmer index, or drought severity index, is based on the water balance equation, taking into account available water content (AWC) in the soil, precipitation, temperature, and the concept of supply and demand. As a standardized measure of drought, this index enables comparison between different times of year and different locations.
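The time-lag classification implies a simple exponential relaxation of dead fuel moisture toward its equilibrium value. The Python sketch below illustrates the idea; the initial and equilibrium moisture percentages are arbitrary illustrative numbers, not values from any fuel model.

```python
import math

def dead_fuel_moisture(m0, m_eq, hours, timelag_h):
    """Exponential response of dead fuel moisture toward equilibrium.

    After one time lag the fuel has closed 1 - 1/e (~63%, roughly
    two-thirds) of the gap between its initial and equilibrium moisture.
    """
    return m_eq + (m0 - m_eq) * math.exp(-hours / timelag_h)

# A 10-h fuel drying from 20% toward a 5% equilibrium moisture content:
after_one_lag = dead_fuel_moisture(m0=20.0, m_eq=5.0,
                                   hours=10.0, timelag_h=10.0)
fraction_closed = (20.0 - after_one_lag) / (20.0 - 5.0)  # ~0.63
```

This also makes clear why 1000-h fuels respond to seasonal drought while 1-h fuels track the diurnal cycle: only the time constant differs.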
While the assessment of fire danger is largely based on a wide range of environmental observations, effective wildland fire management depends on accurate wildland fire spread prediction. Numerical weather prediction (NWP) was one of the early applications of computers and numerical methods for solution of partial differential equations [see the chapter in this monograph series by Benjamin et al. (2019)]. The increased computational power led to higher-resolution NWP and development of the first limited-area models in the early 1970s (e.g., the Mesoscale Model; Anthes and Warner 1974, 1978). These developments coincided with the first attempts to numerically simulate wildland fire behavior (Sanderlin and Sunderson 1975; Sanderlin and Van Gelder 1977). Simulation of wildland fire behavior required development of mathematical models of wildland fire spread. One such model that is still widely used today was developed by Rothermel (1972). Rothermel combined theoretical considerations and empirical observations to derive a wildland fire rate of spread model linking fuel properties and environmental conditions. According to the Rothermel model, in addition to the packing ratio, bulk density of the fuel bed, heat of preignition, fuel loading, the fuel’s mineral content, and the fuel’s heat content, which depend on the fuel type, the rate of spread also depends on the wind speed, terrain slope, and fuel moisture content. The Rothermel fire spread model represents a core component of a number of wildland fire spread simulation models. These wildland fire spread simulation models combine a fire spread model with information about the environmental conditions to provide estimates of wildland fire perimeter growth, flame length, crowning, potential for spotting, heat release, and so on. Effective wildland fire spread simulation models lead to better understanding and prediction of fire behavior. 
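The headline form of the Rothermel spread rate can be written compactly. The Python sketch below encodes only the top-level equation; the empirical sub-models that derive its inputs from fuel-model parameters (the bulk of Rothermel 1972) are omitted, and the numerical inputs shown are illustrative assumptions, not values from a standard fuel model.

```python
def rothermel_spread_rate(i_r, xi, phi_w, phi_s, rho_b, eps, q_ig):
    """Headline form of the Rothermel (1972) surface fire spread rate,

        R = I_R * xi * (1 + phi_w + phi_s) / (rho_b * eps * Q_ig),

    where I_R is the reaction intensity, xi the propagating flux ratio,
    phi_w and phi_s the wind and slope factors, rho_b the fuel-bed bulk
    density, eps the effective heating number, and Q_ig the heat of
    preignition. The empirical closures supplying these inputs from fuel
    properties are not implemented here.
    """
    return i_r * xi * (1.0 + phi_w + phi_s) / (rho_b * eps * q_ig)

# Illustrative (not fuel-model-derived) inputs; the wind and slope
# factors enter multiplicatively on top of the no-wind spread rate:
calm = rothermel_spread_rate(i_r=5000.0, xi=0.05, phi_w=0.0, phi_s=0.0,
                             rho_b=1.0, eps=0.5, q_ig=250.0)
windy = rothermel_spread_rate(i_r=5000.0, xi=0.05, phi_w=4.0, phi_s=0.0,
                              rho_b=1.0, eps=0.5, q_ig=250.0)
```

The structure of the equation, a heat source term divided by a heat sink term with wind and slope amplification, is what makes it cheap enough to embed in the coupled simulation models discussed next.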
Such models can be used to mitigate risk of wildland fires, plan fire suppression activities, and aid in training firefighters. The effectiveness of wildland fire spread models depends on the fidelity of representing the physical processes that govern fire behavior, as well as fuel types and fuel moisture content. Accurate, high spatial resolution characterization of fuel types is therefore critical for accurate wildland fire spread prediction. Fuels characteristic of the United States are categorized by the fuel models of Anderson (1982) and Scott and Burgan (2005). Scott and Burgan expanded the original 13 fuel types of the Anderson model to 40 fuel types. High-resolution, 30-m gridcell-size fuel maps are available for both fuel models. Keane (2015) presented a comprehensive summary of wildland fuel types and concepts and related applications including fuel sampling, mapping, and treatments.
Sullivan (2009a,b,c) presented an extensive review of models for simulation of wildland fire spread developed since 1990. Sullivan classifies models based on their complexity and theoretical or empirical underpinnings as: physical and quasi-physical, empirical and quasi-empirical, and simulation and mathematical analog models. An alternative classification of wildland fire spread simulation models divides them into uncoupled and coupled models. Uncoupled models do not include a dynamic representation of atmospheric conditions, but rather rely on local measurements, weather forecasts, or offline atmospheric simulations for wind speed and wind direction, as well as temperature, moisture, and other environmental conditions needed to predict fire behavior. Uncoupled models cannot account for the effect of heat released by combustion processes on the atmospheric flow conditions at the flaming front. The heat released by combustion induces convective circulations that, depending on the rate of heat release, can result in significant modification of local atmospheric flows and potentially enhance fire spread. Furthermore, convective plumes raise firebrands that can then be carried long distances downwind from the flaming front, potentially resulting in spotting. In addition to plume rise, spotting efficiency depends on a number of parameters and is essentially a stochastic process (Albini 1983; Martin and Hillen 2016). Coupled models that simultaneously resolve atmospheric motions and model combustion processes account for the feedbacks between the flaming front and atmospheric conditions, and therefore can potentially represent and predict fire behavior more accurately. These models are substantially more complex than uncoupled models and therefore require significant computational resources. However, coupled models are required to capture extreme fire behavior such as fire whirls and tall convection columns that can result in rapid rates of fire spread. 
Coupled models can be further divided into those that rely on fire spread models such as the Rothermel (1972) model to account for the combustion effects and those that resolve some elements of combustion processes. The latter require much higher resolution with grid sizes of a few meters or less. Because of computational requirements, such models are most often used as research tools to study wildland fire spread under idealized conditions. Two representatives of this group of models are FIRETEC (Linn 1997; Linn et al. 2002) and the Wildland–Urban Interface Fire Dynamics Simulator (WFDS; Mell et al. 2007) models. The complex combustion reactions of a wildland fire are represented in FIRETEC using a simplified set of reactions including pyrolysis of vegetative fuels, solid–gas reactions, and gas–gas reactions (Linn 1997). This model can be further simplified by reducing the combustion process to a single solid–gas reaction (Linn et al. 2002). While the FIRETEC model requires grid cell sizes on the order of a meter or less, the WFDS model is commonly used for slightly coarser-resolution simulations with grid cell sizes of a few meters. The WFDS model is an extension of the Fire Dynamics Simulator developed by the Building and Fire Research Laboratory at the National Institute of Standards and Technology (McGrattan et al. 2018) that accounts for wildland fuel combustion. In the WFDS, detailed chemical reactions are not represented but combustion is modeled assuming that the time scale of the chemical reactions is significantly shorter than the time scale of mixing, so that the combustion is a result of stoichiometric mixing of the fuel gas and oxygen independent of temperature. The spatial resolution requirement of coupled models that include combustion models implicitly limits the size of fires that can be simulated. 
Nevertheless, when combined with observations, validated high-resolution models represent an indispensable tool for developing a better understanding of the processes governing wildland fire behavior and for developing better fire-spread parameterizations for operational wildland fire spread simulation models. While these models are computationally intensive, they can be used for planning controlled burns and other fuel-management strategies.
Coupled models that can be used as a component of decision-support systems for wildland fire spread prediction usually rely on parameterizations of wildland fire spread based on the Rothermel or similar models. One such model is the Coupled Atmosphere Wildland Fire Environment (CAWFE) model (Clark et al. 1996). The CAWFE model couples the Clark–Hall cloud-scale model (Clark and Hall 1991) with the Rothermel (1972) rate-of-fire-spread model and fuel burn rates determined experimentally by Albini (1976). More recently, following the developments of Clark et al. (1996), the rate-of-fire-spread model implemented in CAWFE was integrated into the WRF Model (Skamarock and Klemp 2008). The WRF Model is a limited-area model widely used for both operational weather forecasting and research studies. It also serves as a platform for testing and evaluating improvements to parameterizations of various atmospheric processes. The WRF Model can be configured as a coupled atmosphere–wildland fire model, known as WRF-Fire (Mandel et al. 2009; Coen et al. 2013; e.g., Fig. 24-8).
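The structure of a Rothermel-style spread-rate calculation can be sketched in a few lines. The functional form below follows the heat-source over heat-sink balance of the Rothermel (1972) model, but every numerical value is an illustrative stand-in, not a published fuel-model coefficient.

```python
# Minimal sketch of a Rothermel-style surface rate-of-spread calculation.
# The functional form follows Rothermel (1972); all numerical values below
# are purely illustrative, not the published fuel-model coefficients.

def rate_of_spread(reaction_intensity, propagating_flux_ratio,
                   wind_factor, slope_factor,
                   bulk_density, effective_heating_number,
                   heat_of_preignition):
    """R = I_R * xi * (1 + phi_w + phi_s) / (rho_b * eps * Q_ig)."""
    heat_source = reaction_intensity * propagating_flux_ratio * (
        1.0 + wind_factor + slope_factor)
    heat_sink = bulk_density * effective_heating_number * heat_of_preignition
    return heat_source / heat_sink

# Illustrative (hypothetical) values: wind increases spread relative to calm.
calm = rate_of_spread(5000.0, 0.05, 0.0, 0.0, 2.0, 0.4, 250.0)
windy = rate_of_spread(5000.0, 0.05, 3.0, 0.0, 2.0, 0.4, 250.0)
```

The sketch makes plain why wind and slope enter as multiplicative amplifiers of the no-wind spread rate, which is the property that coupled models exploit when feeding fire-modified winds back into the spread calculation.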
While it has been recognized that an effective decision-support system for wildland fire prediction needs to include a capability that couples a weather forecast with fire spread prediction, such a system has not yet been implemented (Sun et al. 2009). The complexity of the weather–wildland fire system, the required high-resolution data, the associated uncertainties, and significant computational requirements have until recently precluded development of an effective operational coupled system. The development of databases needed by coupled operational models, including frequently updated high-resolution fuel and fuel moisture content maps and fire perimeters, as well as high-resolution datasets from prescribed burns (e.g., RxCADRE; Clements et al. 2016) that can be used for their assessment, is a prerequisite for the effective integration of coupled models into decision-support systems. Advances in computational platforms, including high-performance computing and cloud computing, will enable the transfer of models that are currently used as research tools to operations.
At present, uncoupled wildland fire spread simulation models represent the core prediction capability for decision-support systems for wildland fires. Some of the more widely used uncoupled wildland fire models are the Fire Area Simulator (FARSITE; Finney 1998) and BehavePlus (and its predecessor “BEHAVE”; Andrews 1986, 2014) in the United States, Prometheus, designed for Canadian fuel complexes (Tymstra et al. 2009), Amicus in Australia (Plucinski et al. 2017), and “SYPYDA” for Mediterranean pine forests (Mitsopoulos et al. 2016), among others. Uncoupled models are able to provide predictions of wildland fire behavior in real time with limited computational resources. They utilize weather data either from weather forecasts provided by national centers (e.g., NCEP) or local observations, or rely on wind fields generated by downscaling weather forecasts using diagnostic, mass-consistent wind models such as WindNinja (Forthofer et al. 2014a,b). The capabilities of the WindNinja model have recently been extended to include an option for integrating the momentum conservation equations (WindNinja 2018).
Both coupled and uncoupled models that rely on semiempirical models for rate of fire spread require a flaming-front-tracking algorithm. FARSITE and Prometheus use the Huygens principle of wave propagation for that purpose (Huygens 1690), and the CAWFE model includes a Lagrangian particle-tracer algorithm (Clark et al. 2004). The coupled atmosphere–wildland fire model based on the Meso-NH mesoscale model (Filippi et al. 2009) uses the method of markers. Several models, including WRF-Fire (Mandel et al. 2009), use the level-set technique for front tracking. The level-set technique is based on firm mathematical foundations and is widely used in computational physics for tracking moving boundaries (Osher and Sethian 1988). Bova et al. (2016) found minor differences between the marker method and level-set methods implemented in the same code, and Muñoz-Esparza et al. (2018) demonstrated that errors in fire spread can be reduced significantly by using a higher-order scheme for level-set advection and implementing a level-set reinitialization algorithm. In addition to front-tracking algorithms, wildland fire spread simulation models include crown fire spread models (e.g., Rothermel 1991) and fire-spotting models (e.g., Albini 1983).
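The level-set idea behind several of these front-tracking schemes can be illustrated at toy scale: the flaming front is the zero contour of a field phi that obeys phi_t + F |grad phi| = 0 for outward spread rate F. The sketch below assumes a constant spread rate and a small grid with a first-order Godunov upwind scheme; it is not any operational model's implementation.

```python
import math

# Minimal level-set front-propagation sketch (hypothetical setup):
# phi_t + F |grad phi| = 0 with a constant outward spread rate F,
# solved with a first-order Godunov upwind scheme on a small grid.

N, h, F, dt = 41, 1.0, 1.0, 0.4   # grid size, spacing, spread rate, time step

# Signed distance to a circular "ignition" front of radius 5 at the center.
c = (N - 1) / 2
phi = [[math.hypot(i - c, j - c) - 5.0 for j in range(N)] for i in range(N)]

def step(phi):
    new = [row[:] for row in phi]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            dmx = (phi[i][j] - phi[i - 1][j]) / h   # backward difference, x
            dpx = (phi[i + 1][j] - phi[i][j]) / h   # forward difference, x
            dmy = (phi[i][j] - phi[i][j - 1]) / h
            dpy = (phi[i][j + 1] - phi[i][j]) / h
            grad = math.sqrt(max(dmx, 0.0) ** 2 + min(dpx, 0.0) ** 2 +
                             max(dmy, 0.0) ** 2 + min(dpy, 0.0) ** 2)
            new[i][j] = phi[i][j] - dt * F * grad   # Godunov upwind for F > 0
    return new

def burned_area(phi):
    return sum(1 for row in phi for v in row if v <= 0.0)  # cells inside front

a0 = burned_area(phi)
for _ in range(10):
    phi = step(phi)
a1 = burned_area(phi)   # the front (the phi = 0 contour) expands outward
```

In operational codes F varies with fuel, wind, and slope via the rate-of-spread parameterization, and higher-order advection and reinitialization (Muñoz-Esparza et al. 2018) replace the first-order scheme used here.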
Developing an effective decision-support system for wildland fire represents a significant challenge. In addition to the large amount of frequently updated high-resolution data characterizing environmental and fuel conditions, wildland fire spread prediction requires high-resolution simulations. For the largest, most destructive wildland fires that have the potential to generate significant fire-induced phenomena, coupled atmosphere–wildland fire models are required. While significant advances have been made in understanding and modeling wildland fire behavior, there are still gaps in our understanding of the underlying processes, in addition to computational limitations. Since Finney et al. (2013) identified the need for a comprehensive theory of wildland fire spread, advances have been made in elucidating the role of buoyant flame dynamics in wildfire spread (Finney et al. 2015). However, a comprehensive theory of wildland fire behavior can only be achieved by studying the intricate nonlinear feedbacks that characterize coupled atmosphere–wildland fire environments and lead to the observed wildland fire behaviors. An effective decision-support system for wildfire management can be built on firm foundations by recognizing and quantifying the uncertainties inherent in this complex coupled system. To achieve this goal, a concerted effort is needed to collect high-resolution and high-quality data from both wildfires and prescribed burns.
5. Applications of AI in applied meteorology
Humans have always noticed patterns in the weather and sought to understand them. This understanding advanced along two parallel directions. One direction was categorizing weather events and looking for patterns that often were repeatable. An example of this approach is the old saying “Red sky at night, sailor’s delight.” People did not feel that they needed to understand the process to be able to use the knowledge to generalize the likelihood of good versus bad weather for the next day (Haupt et al. 2009c). This approach is the basis for early forms of artificial intelligence: break things into generalizable rules based on observations. This heuristic method is in stark contrast to the reductionist approach, where scientists sought to understand the processes by breaking them down into small parts. An example is using a control volume approach to analyze the various forces on a parcel of air; this method is typically used to derive the dynamical equations of motion that describe the advection of weather patterns. We were not able to integrate those equations successfully until the advent of the digital computer. After the initial successes of integrating the Navier–Stokes equations by Charney et al. (1950), the dynamical/physical approach based on the reductionist theories advanced alongside the growth of computational power.
a. The rise of AI
Advances in computing not only spurred advances in the dynamical/physical approach, but also enabled modern artificial intelligence (AI) to develop. In 1950, Alan Turing published a paper exploring whether machines could be trained to think and proposed a test to determine whether a suspicious interrogator could distinguish answers to questions from a machine versus a human (Smith et al. 2006). Simultaneously, Claude Shannon was contemplating ways to teach a computer to play chess (AAAI 2017). In 1956, John McCarthy convened a conference at Dartmouth College that brought together the top researchers and coined the name “artificial intelligence” for the push to advance the concept of machines emulating human thought (Smith et al. 2006; AAAI 2017). Although less progress was made at that meeting than originally hoped, it prompted a few decades of defense funding to propel the field forward, particularly in areas of machine translation of languages. Much of this work was in the more heuristic field of expert systems, which codifies and blends the knowledge of experts (Poole and Mackworth 2017). Unfortunately, early hype led to disappointments for the sponsors: when apparent successes did not lead to the expected usable products, funding was discontinued. Thus ensued the “AI winter” beginning in the 1980s in the United States, leading to two decades of reluctance to fund work in AI. A host of new names for the field emerged to mask the real nature of the research, including machine learning, informatics, pattern recognition, knowledge-based systems, and more (Smith et al. 2006). These nomenclatures attempted to distinguish the newer, more data-driven methods from the earlier, primarily heuristic approaches. Industry, however, continued the work, and with IBM’s success with Deep Blue beating chess champion Garry Kasparov in 1997, interest in AI resumed (Smith et al. 2006) and U.S. funding agencies began to regain interest in the field.
During the boom in the more heuristic methods, environmental scientists began codifying expert systems as a way to combine information from multiple sources and make logical inferences. The March 1987 special issue of the Journal of Atmospheric and Oceanic Technology (volume 4, number 1; http://journals.ametsoc.org/toc/atot/4/1) gives a sampling of the types of work being done at that time. It includes examples of convective storm forecasting (Elio et al. 1987), recognizing low-level wind shear from radar observations (Campbell and Olson 1987), and pattern recognition as applied to forecasting (McArthur et al. 1987).
The environmental sciences possess a host of interesting problems amenable to advancement by intelligent techniques. Those advances were occurring in parallel to the advent of both NWP and AI. They began as advances using increasingly complex applications of statistics. NWP forecasts could be improved by using multivariate linear regression on historical data, producing model output statistics (MOS; Glahn and Lowry 1972) that could apply those “learned” corrections to the current forecast. More could be discerned about atmospheric modes of oscillation by doing multidimensional correlation analysis (Schlatter et al. 1976) and principal component analysis, which began to be dubbed empirical orthogonal functions (Lorenz 1956; Wilks 2005; Hasselmann 1988). Those correlation techniques could also be applied in time to train predictive models using canonical correlation analysis and modes of oscillation (von Storch and Navarra 1995). Researchers began building models using these eigenmodes as basis functions, both in terms of dynamical model decomposition (Selten 1997) and in terms of applying Markov process theory to build stochastic forecast systems, and using the results to identify the time-dependent principal oscillation patterns (Hasselmann 1988; Penland 1989; Penland and Ghil 1993; von Storch et al. 1995; Branstator and Haupt 1998). Such models were often shown to predict as well as physical models (Penland and Magorian 1993; Penland and Matrosova 1998) or to better respond to imposed forcing (Branstator and Haupt 1998).
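The MOS idea just described reduces, in its simplest form, to a least-squares regression of observations on raw model forecasts, with the fitted relation then applied as a "learned" correction to new forecasts. The single-predictor sketch below uses invented data; operational MOS uses many predictors and stations.

```python
# Minimal MOS-style sketch: learn a linear correction to raw model forecasts
# from historical (forecast, observation) pairs, then apply it to a new
# forecast. The data below are synthetic, for illustration only.

def fit_mos(forecasts, observations):
    """Least-squares fit of obs ~ a + b * forecast (single-predictor MOS)."""
    n = len(forecasts)
    mf = sum(forecasts) / n
    mo = sum(observations) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecasts, observations))
    var = sum((f - mf) ** 2 for f in forecasts)
    b = cov / var
    a = mo - b * mf
    return a, b

# Synthetic history: the raw model runs 2 degrees too warm.
raw_fcst = [10.0, 14.0, 18.0, 22.0, 26.0]
obs      = [ 8.0, 12.0, 16.0, 20.0, 24.0]
a, b = fit_mos(raw_fcst, obs)
corrected = a + b * 30.0   # corrected forecast for a new raw forecast of 30
```

Because the synthetic model error here is a pure bias, the fit recovers slope 1 and intercept −2 exactly; real MOS equations also correct conditional errors that vary with the forecast value.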
Some of the blending of statistical methods described above began to invoke the philosophy of machine learning. For instance, when making forecasts for a specific location, human forecasters often study the output of various models and use their experience and intuition to blend the information and mentally weight each model depending on the current weather situation. From experience they know that when a front is encroaching on the Colorado Front Range, model A may get the timing better than model B, but model B may better predict the resulting precipitation. They mentally correct the model output. In the mid-1990s, companies such as the Weather Channel decided to scale up their operations internationally, which meant that the number of locations continually requiring forecasts would exceed the capability of human forecasters. Thus, they initiated a collaboration with the National Center for Atmospheric Research (NCAR) to design, test, and deploy a computerized system to accomplish this goal. The outcome was the copyrighted Dynamic Integrated Forecast (DICast) system. DICast ingests output from multiple models and applies a two-step process to optimize the blending (Myers et al. 2011; Mahoney et al. 2012). First, the biases of each model are removed using a dynamic version of MOS. Second, gradient descent methods are used to optimize the weights assigned to each of the models for each particular lead time at each particular location. Thus, the results are very specific to the relative performance of each input model at each location for each lead time. As with MOS, this information is learned by DICast from historical observations and forecasts, and the system is updated dynamically. Although DICast has evolved over the last two decades, it is still being used as the primary postprocessing engine by some of the best-known forecasting companies. It is one of the first forecasting systems that crossed over from applications of statistics to AI.
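The two-step process can be sketched at toy scale: remove each model's mean bias over a training history, then learn blending weights by gradient descent on the squared error of the weighted sum. The data, the simple mean-bias correction, and the learning rate below are invented for illustration; the operational DICast system is far more elaborate.

```python
# Toy sketch of a DICast-like two-step blend on synthetic data: (1) remove
# each model's mean bias over a training history, then (2) learn blending
# weights by gradient descent on the squared error of the weighted sum.
# All data and settings here are invented for illustration.

models = {                      # historical forecasts from two models
    "A": [11.0, 13.0, 15.0, 17.0],
    "B": [8.0, 12.0, 14.0, 18.0],
}
obs = [10.0, 12.0, 14.0, 16.0]  # verifying observations

# Step 1: remove each model's mean bias (a crude stand-in for dynamic MOS).
bias = {m: sum(f - o for f, o in zip(fc, obs)) / len(obs)
        for m, fc in models.items()}
debiased = {m: [f - bias[m] for f in fc] for m, fc in models.items()}

# Step 2: gradient descent on the blending weights.
w = {m: 0.5 for m in models}
for _ in range(1000):
    for t, o in enumerate(obs):
        err = sum(w[m] * debiased[m][t] for m in models) - o
        for m in models:
            w[m] -= 0.001 * err * debiased[m][t]   # one stochastic step

rmse = (sum((sum(w[m] * debiased[m][t] for m in models) - o) ** 2
            for t, o in enumerate(obs)) / len(obs)) ** 0.5
```

In the synthetic history, model A is biased but otherwise perfect, so the learned weights shift toward the debiased model A and the blended RMSE falls well below that of either raw input.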
We choose not to dwell on differentiating between the complexity that evolved in the statistical methods from AI, but rather take the point of view that we do not need to. We prefer to consider it as a continuum of statistical/machine-learning techniques, and practitioners can draw from that full continuum to apply the right tool for each problem, which is certainly what has occurred. Some of the same researchers who were convolving multiple statistical methods began to look more broadly at the AI methods to apply to their problems. Many of the advances began to diverge from the expert-system approach and toward learning directly from the data. Neural networks (NNs) became a popular approach. Krasnopolsky et al. (1995) used a neural network to retrieve wind speeds from a microwave imager. Gardner and Dorling (1998) reviewed NNs and how they could be used in atmospheric sciences, such as in pattern classification, prediction, and function approximation. Hsieh and Tang (1998) described how some perceived difficulties with using neural networks in meteorological and oceanographic prediction can be overcome. Marzban and Stumpf (1996) used a neural network to diagnose circulations likely to lead to tornadoes. Other problems were more oriented toward optimization. During the same time period, Haupt (1996) began exploring using genetic algorithms to interpret the changes in eigenfunctions used in Markov models as the dimensionality was changed.
To advance the field, the AMS Committee on Applications of Artificial Intelligence in the Environmental Sciences taught a series of short courses, including in Orlando, Florida, in 2001, Seattle, Washington, in 2004, Atlanta, Georgia, in 2006, Corpus Christi, Texas, in 2007, Seattle in 2011, and Seattle in 2017. The lectures were archived in a book in 2009 (Haupt et al. 2009d). The committee also began to hold regular AI forecasting contests, including for storm-type classification in 2008, precipitation-type classification in 2009, wind power forecasting in 2010, daily average solar energy prediction in 2014, and predicting rainfall from radar observations in 2015. These courses and contests encouraged more applications, and the field continued to grow. When the committee turned to the Kaggle competition website (https://www.kaggle.com/) to host the contest in 2014, Kaggle developers won the top three spots, all using forms of gradient-boosted regression trees. This outcome spurred the meteorological community to begin employing these techniques as well.
b. Current applications and methods
As we celebrate the 100th anniversary of the American Meteorological Society, the applications of AI to the atmospheric sciences are far too numerous to fully review. Instead, we will highlight a few areas of applications where AI techniques have facilitated substantial advances and point the reader to sources of further information on each of these. This section is organized by broad application area rather than by technique. The most successful applications have been built on a firm understanding of the underlying physics, allowing the practitioner to draw from the full continuum of methods as well as that knowledge of the physics to best solve the problem.
1) Weather forecasting
Forecasting the weather, the ocean state, the ecosystem conditions, and beyond is one of the focal points of applied environmental science. There has been a huge jump in data production as models move to higher resolution, new instruments are deployed, and more remote sensing methods allow unprecedented levels of detail. As access to these data grows, it becomes less possible for a human to absorb and integrate all of the information that they contain. Thus, it is not surprising that data-based techniques for forecasting the weather have been one of the most prevalent uses of AI in environmental science. MOS and DICast, mentioned above, were certainly successful demonstrations that whetted the community’s appetite for more specific applications. McGovern et al. (2017) review the types of methods that have been applied to forecasting problems and provide examples of some recent successful applications to high-impact weather. Gradient-boosted regression trees proved the most accurate method for predicting storm duration, forecasting severe wind (Lagerquist 2016), and predicting severe hail (Gagne et al. 2017). Another application highlighted there is classifying precipitation using crowd-sourced data together with forecasts from physical models (Elmore et al. 2014, 2015; Elmore and Grams 2016). All of these examples incorporate knowledge of the physics into the training design and variable selection process. There is a plethora of ways to smartly use AI in weather prediction, too numerous to review here.
Applications in sectors with very specific needs have emerged. As discussed in Part II of this series of chapters on applied meteorology in the AMS 100 Year Monograph (Haupt et al. 2019b), success in those types of applications relies on communicating with the end user and developing methods that the user will trust. Not only is accuracy needed, but an understanding of how to use the output must also be communicated. One example presented in McGovern et al. (2017) is aviation turbulence prediction. Multiple techniques have come together to meet these needs at the same time as advancing the underlying science (Williams 2009; McGovern et al. 2014). Part II (Haupt et al. 2019b) describes some successful aviation systems based on blending AI with physics. There have also been useful applications in air pollution meteorology. Gardner and Dorling (2000) showed that an NN performed better than either linear regression or classification and regression trees. Pelliccioni et al. (2003) coupled NNs and dispersion models to optimize the important variables of the dispersion model, then Pelliccioni and Tirabassi (2006) used those integrated models on traditional observational dispersion datasets and showed improvements upon using the dispersion models alone.
AI has also been a prevalent method for predicting extreme events such as sea level and coastal effects. Hsieh (2009) describes ways to use nonlinear principal component analysis, based on NNs, to better analyze tidal data. Tissot et al. (2002) and Cox et al. (2002) report integrating NN and statistical approaches to predict water levels in the microtidal shallow waters of the Gulf of Mexico, where atmospheric forcings often dominate. Collins and Tissot (2015) used an NN to predict thunderstorms in southern Texas. Roebber et al. (2003) applied an ensemble of neural networks to the problem of predicting/diagnosing snow density, and the technique was subsequently implemented at NOAA’s National Centers for Environmental Prediction as part of their national snowfall guidance. McCandless et al. (2011) compared multiple AI methods for predicting snowfall and found that there are a myriad of ways to improve such forecasts. Jin et al. (2008) combined an evolutionary genetic algorithm with an NN to form a genetic NN, used it for ensemble prediction of typhoon intensity, and showed that it overcame the overfitting problem.
Part II of this series (Haupt et al. 2019b) also discussed forecasting for renewable energy and provided some examples of how making forecasts more accurate enables utilizing higher penetrations of these variable renewable resources. It also opens an opportunity to advance the AI techniques to meet their goals. For instance, short-range forecasting, or nowcasting, allows the utilities to foresee ramps in the production of renewables; both up ramps and down ramps can disrupt the energy system. To deal with such ramps, the utility must be able to plan to adjust other power units in compensation. That chapter reviews the plethora of methods used for renewable energy and how they have helped enable deploying more of this variable resource.
2) Probabilistic forecasting
Another area of forecasting ripe for advances using AI is probabilistic forecasting. The current approaches to probabilistic forecasting involve running ensembles of NWP simulations with perturbations to the initial conditions, boundary conditions, physics parameterization, or even base model dynamics in an attempt to quantify the uncertainty. That approach requires large computing resources, particularly to run a sufficient number of ensemble members to span the uncertainty space. Once again, statistical methods have been developed to “dress” an ensemble to improve its reliability (e.g., Raftery et al. 2005).
However, AI methods have emerged that go beyond that approach to work with a single NWP run and historical data. Krasnopolsky (2013) reviews the use of NNs to form ensembles for various applications. He compares nonlinear approaches to linear ones and demonstrates marked improvements using the nonlinear approaches for several variables.
Another useful technique for generating AI ensembles is evolutionary programming (EP). Roebber (2015c) used EP methods to evolve ensembles, demonstrating that smaller temperature RMSEs and higher Brier skill scores could be generated than with a 21-member operational ensemble. Roebber (2015b) then showed that this method was also successful for minimum temperature forecasts, and then he demonstrated further improvements for adaptive methods (Roebber 2015a).
Composing analog ensembles (AnEn) has been shown to be as accurate and reliable as running a substantial number of ensemble members (Delle Monache et al. 2011, 2013). This method utilizes a single high-quality model simulation that has corresponding observations. For each forecast, a search is made for the closest historical forecasts. The matching observations to those analogous forecasts then form an ensemble. This method has already been applied to forecasting wind (Alessandrini et al. 2015a; Haupt and Delle Monache 2014), solar power (Alessandrini et al. 2015b; Cervone et al. 2017), and air quality (Djalalova et al. 2015), among others. Current research is showing how this method can also be applied in gridded forecasting (Sperati et al. 2017). This is an example of how novel AI applications can reduce the need for large computational resources, which can allow running higher-resolution NWP more frequently while still producing a probabilistic forecast.
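At its core, the AnEn procedure reduces to a nearest-neighbor search over a forecast archive: find the historical forecasts most similar to the current forecast and collect their verifying observations as the ensemble members. The sketch below uses a single predictor and synthetic data; operational implementations match on multiple weighted predictors over a time window.

```python
# Minimal analog-ensemble (AnEn) sketch with synthetic data: find the
# historical forecasts closest to the current forecast and use their
# matching observations as the ensemble members.

history = [  # (past forecast, verifying observation) pairs, synthetic
    (5.0, 5.5), (6.0, 6.2), (10.0, 9.1), (10.5, 9.6), (11.0, 10.2), (20.0, 18.5),
]

def analog_ensemble(current_forecast, history, n_members=3):
    """Rank the archive by forecast similarity; return the analogs' obs."""
    ranked = sorted(history, key=lambda p: abs(p[0] - current_forecast))
    return [obs for _, obs in ranked[:n_members]]

members = analog_ensemble(10.2, history)   # ensemble of verifying observations
mean = sum(members) / len(members)         # deterministic ensemble mean
```

The spread of the returned members provides the uncertainty estimate, which is why a single deterministic model run plus an archive can stand in for a multi-member NWP ensemble.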
3) Climate applications
In the longer term, understanding, predicting, and interpreting the stressors for climate is an important application for AI. AI can accomplish some of the tasks needed to take the next step in interpreting the results of global climate models (GCMs). Pasini (2009) describes how neural networks can be effective at downscaling data from GCMs to more local scales by training to appropriate data.
Hsieh (2009) and collaborators began exploring nonlinear principal component analysis (NLPCA), demonstrating its applicability on chaotic systems and then on various problems such as simulating sea surface temperature and sea level pressure. This method fits a nonlinear curve rather than a straight line when forming the principal components, thus requiring a method such as an NN to accomplish the fit. It can be used to demonstrate the major modes of climate variability, including the Arctic Oscillation, Pacific–North American teleconnection, El Niño–Southern Oscillation, quasi-biennial oscillation, Madden–Julian oscillation, and more, as reviewed by Hsieh (2009) and discussed in detail in the papers referenced therein.
Various AI methods have been used to study predictability because they more easily generalize to the nonlinear realm. The Lorenz three-dimensional attractor (Lorenz 1963) is often the first difficult nonlinear dynamical system tested, and it has been modeled using various AI techniques. Monahan (2000) demonstrated that NLPCA can capture the general map of the Lorenz attractor. Cannon (2006) showed the efficacy of using multivariate NNs to capture intersite correlations for that same Lorenz attractor. Haupt (2006) used a genetic algorithm to fit a nonlinear matrix of Markov process coefficients to the Lorenz system and was able to capture the general shape of the butterfly attractor. Pasini (2009) tested local predictability of the Lorenz attractor using NNs by analyzing frequency distributions of distance errors. As expected, the quasi-bimodal distributions are sensitive to the closeness to transition from one wing of the butterfly to the other.
One can also use AI to study long-term climate based on measured data. Pasini et al. (2017) built an NN model of climate over the past 160 years using both anthropogenic and natural environmental variables that resulted in close agreement with observations. This allowed them to fix certain variables to determine changes in the model under differing assumptions. When anthropogenic forcing was set to preindustrial levels, the results deviated substantially from those observed, indicating that those anthropogenic forcings were associated with the changes in temperature that have been observed. This process also allowed them to analyze the natural variability, look for associations, and study the uncertainties in the analysis.
Finally, we note that various applications need smartly postprocessed climate information and AI methods can greatly aid that process. For example, the energy industry wishes to estimate projected changes in the wind and solar resource under a changing climate. To address this issue over the United States, Haupt et al. (2016) leveraged current reanalysis data as well as model output from regional climate models and a series of AI and statistical methods to create resource estimates of current and projected future climate that contain similar patterns. To do that, they computed self-organizing maps (SOMs) of the current climate reanalyses, then projected the future climate simulations onto those same SOMs. After correcting for changes in temperature and other variables, a future climate database was generated through Monte Carlo sampling of the patterns representative of the specific time of year. That database allows direct comparison with the current climate data. Regional and seasonal variability was evident in the projected changes in the wind and solar resource.
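The pattern-matching step in that workflow can be illustrated with a toy SOM: train a small one-dimensional map on samples from a "current climate," then project new samples onto their best-matching node. The two-regime data, map size, and learning schedules below are invented for illustration and bear no relation to the actual climate application.

```python
import math
import random

# Minimal self-organizing map (SOM) sketch: train a 1-D map of four nodes
# on synthetic two-regime data, then project a new sample onto its
# best-matching node. All data and settings are illustrative.

random.seed(0)
data = ([(random.gauss(0, 0.3), random.gauss(0, 0.3)) for _ in range(100)]
        + [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(100)])
random.shuffle(data)

nodes = [[random.random(), random.random()] for _ in range(4)]  # map weights

def bmu(x):   # best-matching unit: nearest node to the sample
    return min(range(len(nodes)), key=lambda k: (nodes[k][0] - x[0]) ** 2 +
                                                (nodes[k][1] - x[1]) ** 2)

for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)             # decaying learning rate
    radius = 2.0 * (1 - epoch / 20) + 0.1   # decaying neighborhood radius
    for x in data:
        win = bmu(x)
        for k in range(len(nodes)):
            h = math.exp(-((k - win) ** 2) / (2 * radius ** 2))  # neighborhood
            nodes[k][0] += lr * h * (x[0] - nodes[k][0])
            nodes[k][1] += lr * h * (x[1] - nodes[k][1])

pattern = bmu((5.2, 4.9))   # project a "future" sample onto the trained map
```

After training, different regimes map to different nodes, which is the property the climate application exploits when projecting future simulations onto patterns learned from current reanalyses.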
4) Optimization
A major class of problems for which AI applications have demonstrated progress is optimization. In optimization problems, we often know a final state or a series of boundary conditions and want to find a solution that fits those conditions. Quite a few problems can be cast in terms of optimization. To do that, one must define an objective, or cost, function that is to be minimized (or maximized).
Here we focus on genetic algorithms (GAs) as an example method that is robust at finding global minima of a cost surface without the necessity of being able to take derivatives, as required for some of the standard gradient-based methods. GAs can work with extremely complex cost surfaces and simultaneously search a wide sample of the cost surface for the best solution; therefore, they are less likely to become stuck in local minima. John Holland first introduced GAs in the 1960s and 1970s (Holland 1975), but they were popularized by his student David Goldberg (Goldberg 1989). Quite a few flavors of genetic algorithms have been developed since that time. Although they were originally coded as binary GAs, the more versatile continuous or real-valued GAs became more popular. An advantage of the GA is that one can simultaneously search for binary, continuous, and integer-valued parameters in a single problem (Haupt et al. 2011).
The GA mimics a combination of genetic recombination and evolution to reach solutions that fit the prescribed conditions. It begins with a randomly constructed population of chromosomes, which are strings of encoded variables represented in the cost function. Each chromosome is fed to the cost function for evaluation, and the costs are ranked. The best, or “most fit,” chromosomes survive to the next generation while the rest die off. Those fit chromosomes form the mating pool. The operation of mating combines information from two chromosomes to produce offspring chromosomes. The mutation operation causes random changes in some chromosomes. These two operations of mating and mutation allow exploration and exploitation of the cost surface in an iterative fashion, allowing evolution toward the global optimum of the cost function. This process is illustrated in Fig. 24-9. More details can be found in Haupt and Haupt (2004), among other references.
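The mate/mutate cycle just described condenses into a short real-valued GA. The quadratic cost function, population size, blend crossover, and mutation rate below are all illustrative choices for the sketch, not settings from any of the cited applications.

```python
import random

# Minimal real-valued genetic algorithm following the rank/mate/mutate cycle
# described above. The cost function and all parameters are illustrative.

random.seed(1)

def cost(x):            # toy cost surface with its minimum at (3, -2)
    return (x[0] - 3.0) ** 2 + (x[1] + 2.0) ** 2

# Random initial population of 20 two-variable chromosomes.
pop = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(20)]

for generation in range(100):
    pop.sort(key=cost)                 # evaluate and rank by cost
    pop = pop[:10]                     # the fittest half survives (mating pool)
    while len(pop) < 20:
        p1, p2 = random.sample(pop[:10], 2)
        alpha = random.random()        # blend crossover of two parents
        child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
        if random.random() < 0.2:      # occasional random mutation
            child[random.randrange(2)] += random.gauss(0, 1)
        pop.append(child)

best = min(pop, key=cost)              # evolved near-optimal solution
```

Because ranking keeps the best chromosome in every generation, the best cost is monotonically nonincreasing, while mutation keeps the search from collapsing prematurely onto a local minimum.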
GAs have been applied to a wide range of problems. They have been used to solve inverse problems, to design optimal solutions, to demonstrate a dynamic assimilation method (Haupt et al. 2009b, 2013), and even to solve nonlinear partial differential equations (Haupt 2006). One series of problems involved estimating the source term of an unspecified pollution source. When one measures levels of air contamination, it is often desirable to be able to apportion that contamination to its sources. When there is insufficient information to measure percentages of contaminant, one can combine information on wind direction and speed to estimate how dispersion may have occurred from various sources in the region. Genetic algorithms have been shown to be successful at such estimations (Haupt 2005, 2007; Haupt et al. 2006, 2009a; Allen et al. 2007a,b; Cervone and Franzese 2011). Defense agencies use such techniques to identify the location and release amounts for potentially unknown releases of hazardous contaminants and the GA has proven to be competitive with other methods, including Bayesian and variational methods (Bieringer et al. 2017; Petrozziello et al. 2016). These methods have also been used to estimate the amount of volcanic ash emitted (Schmehl et al. 2012). Kuroki et al. (2010) used a genetic algorithm combined with an expert system to determine best paths to guide an unmanned aerial vehicle to sample a contaminant in order to back-calculate the source parameters.
Evolutionary strategies such as GAs have also proved useful for other optimization problems. Mulligan and Brown (1998) used a GA to calibrate a water quality model by estimating optimal parameters. They showed that the GA works better than more traditional techniques and that it has the added capability of providing information about the search space, enabling them to develop confidence regions and parameter correlations. Other water quality studies use GAs to determine flow routing parameters (Mohan and Loucks 1995), to size distribution networks (Simpson et al. 1994), to solve groundwater management problems (McKinney and Lin 1993; Rogers and Dowla 1994; Ritzel et al. 1994), and to calibrate parameters for an activated sludge system (Kim et al. 2002).
Peralta and collaborators have combined GAs with neural networks and simulated annealing techniques to solve groundwater supply management problems. Aly and Peralta (1999a) used GAs to fit parameters of a model that optimizes pumping locations and schedules for groundwater treatment. In a subsequent step, they combined an NN with the GA to model the complex response functions (Aly and Peralta 1999b). Shieh and Peralta (1997) then combined simulated annealing with GAs to maximize efficiency. Fayad (2001), together with Peralta, examined managing surface water and groundwater supplies using a Pareto GA with a fuzzy-penalty function to sort optimal solutions, while using an NN to model the complex groundwater system responses. Chan Hilton and Culver (2000) used GAs to optimize groundwater remediation design.
5) Emulating processes
Many environmental processes are extremely complex, and our knowledge of precisely how they work is somewhat limited (e.g., cloud physics). Others can be modeled but are computationally expensive (e.g., radiative transfer). An alternative is to emulate such processes with AI models. Krasnopolsky (2009, 2013) has been quite prolific in developing these methods, which are reviewed in those two overview works. Most modern forecast models of physical processes are based on partial differential equations derived from first principles plus a series of physics parameterizations. Those parameterizations typically describe processes that are only partially understood and are often a combination of known physics and empirical coefficients derived from data. A question, then, is whether an AI technique can effectively model such processes, forming a hybrid model. A first problem treated by Krasnopolsky and collaborators was emulating the longwave radiation (LWR) component of a GCM, specifically NCAR’s Community Atmosphere Model (CAM). Because the original LWR scheme is a computational bottleneck in CAM, they trained an NN with 50 hidden nodes on data produced by that scheme. The resulting emulation produced results that are barely distinguishable from the original CAM runs. Similar accomplishments were possible for shortwave radiation (SWR) and in other climate models. When both the LWR and SWR schemes were emulated with NNs, the run time sped up by a factor of 12 while preserving the original accuracy.
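The emulation idea can be illustrated with a toy example: train a small NN on input-output pairs generated by an “expensive” routine, then use the cheap NN in its place. The scheme, network size, and training settings below are hypothetical stand-ins for the CAM experiments, not a reproduction of them:

```python
import math
import random

random.seed(1)

def expensive_scheme(x):
    # Stand-in for a costly physics routine (e.g., a radiation scheme).
    return math.sin(2 * x) + 0.5 * x

# Training data generated by running the original scheme.
xs = [i / 50 for i in range(-100, 101)]        # inputs in [-2, 2]
ys = [expensive_scheme(x) for x in xs]

H = 8                                          # hidden nodes
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def predict(x):
    # The cheap emulator: one hidden tanh layer, linear output.
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2

lr = 0.01
for _ in range(2000):                          # plain stochastic gradient descent
    for x, y in zip(xs, ys):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        err = sum(w2[j] * h[j] for j in range(H)) + b2 - y
        for j in range(H):
            dh = err * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err

rmse = math.sqrt(sum((predict(x) - y) ** 2
                     for x, y in zip(xs, ys)) / len(xs))
```

Once trained, `predict` replaces `expensive_scheme` inside the larger model; the speedup comes from the NN being far cheaper to evaluate than the original scheme, at some cost in fidelity.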
Similar advances have been accomplished for emulating nonlinear interactions in wind wave models (Krasnopolsky 2009) and for cloud parameterizations (Krasnopolsky et al. 2013). A model of the surface layer of the atmospheric boundary layer was constructed using NNs by Pelliccioni et al. (1999). Note that this approach could be very promising for future applications, but it requires training data that are sufficiently representative to cover the full range of possible inputs.
6) Image processing
Lakshmanan (2009) reviews methods for automating spatial analysis. He analyzes the features that distinguish spatial analysis, such as the inherent correlations between neighboring points. This work recognizes that each workflow includes essential processes, or elements, such as filtering, edge finding, segmentation, feature extraction, and classification. Some of these processes, such as the classification element, are quite amenable to AI techniques, such as NNs. Putting all of these processes together constitutes a machine learning application.
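A toy version of such a workflow, with illustrative thresholds and labels, might chain several of these elements: smooth a small two-dimensional field (filtering), threshold it (segmentation), measure the largest connected object (feature extraction), and classify on that feature:

```python
def smooth(field):
    # Filtering element: 3x3 mean filter with edge handling.
    n, m = len(field), len(field[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            vals = [field[a][b]
                    for a in range(max(0, i - 1), min(n, i + 2))
                    for b in range(max(0, j - 1), min(m, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out

def segment(field, thresh):
    # Segmentation element: 1 inside an object, 0 outside.
    return [[1 if v >= thresh else 0 for v in row] for row in field]

def largest_object(mask):
    # Feature extraction: size of the largest 4-connected object,
    # found by flood fill.
    n, m = len(mask), len(mask[0])
    seen, best = set(), 0
    for i in range(n):
        for j in range(m):
            if mask[i][j] and (i, j) not in seen:
                stack, size = [(i, j)], 0
                seen.add((i, j))
                while stack:
                    a, b = stack.pop()
                    size += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < n and 0 <= nb < m and
                                mask[na][nb] and (na, nb) not in seen):
                            seen.add((na, nb))
                            stack.append((na, nb))
                best = max(best, size)
    return best

# Classification element: label the field by its largest object's size.
field = [[0, 0, 0, 0, 0],
         [0, 5, 6, 0, 0],
         [0, 6, 7, 0, 0],
         [0, 0, 0, 0, 2],
         [0, 0, 0, 0, 0]]
size = largest_object(segment(smooth(field), 1.0))
label = "storm" if size >= 4 else "no storm"
```

On this field the smoothed blob of high values survives thresholding and is large enough to be labeled a storm; real workflows replace the final rule with a learned classifier, such as an NN, operating on richer feature sets.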
Krasnopolsky (2009) describes methods to extract information from satellite remote sensing in the ocean environment. He discusses how to use NNs for mapping processes, then how to apply NNs both for emulating the forward models and for solving the inverse problems that constitute retrievals. Young (2009) provides a practical application example. These merely scratch the surface of using traditional AI for image processing, and the reader is referred to the extensive literature on the topic.
The rise of deep learning is revolutionizing image processing. “Deep learning” refers to neural networks with many more hidden layers than are traditionally used and to the corresponding techniques to take advantage of them. This inherently image-based method is finding its way into atmospheric science problems. Methods such as convolutional neural networks, generative adversarial networks, recurrent neural networks, and more are currently being applied to problems in image processing and identification. Applications of deep learning in the atmospheric sciences have begun, including identifying, predicting, and interpreting hail processes (Gagne et al. 2018, 2019, manuscript submitted to Mon. Wea. Rev.), creating radar-like precipitation analyses for aviation applications (Veillette et al. 2018), identifying atmospheric rivers in climate simulations (Mahesh et al. 2018), improving the use of satellite data for model initialization (Lee et al. 2018), and climate downscaling (Vandal and Ganguly 2018), among others.
c. Prospects for future advances
Because AI is a rapidly evolving field, it is difficult to predict the advances of the next decade and beyond. The current topics of research are expected to continue to advance as new techniques emerge. Whole new paradigms may arise that change how we think about artificial intelligence and machine learning. However, let us consider what we might see, given both the criticisms of these techniques and recent advances.
One of the primary criticisms of AI in the applications community is the perception among physical scientists that many of the methods are a “black box.” That is changing as AI practitioners focus more on interpretability. Some methods, such as decision trees, can be readily interpreted; for others, including neural networks, one must be careful not to interpret the weights as having physical meaning. Many methods now assist the user in understanding variable importance, which may lead to a deeper understanding of the physics. But many of the problems to which AI is applied are inherently nonlinear, and it is difficult to tease out the relationships in meaningful ways. In these cases, the practitioner may need to design clever numerical experiments to interpret the results. A recent example is Pasini et al. (2017), who used NNs to determine the most important variables contributing to the observed patterns of long-term temperature change, testing the impact of specific variables on the outcome and attributing the result to the most important ones. They argue that these methods build more independence into their attribution studies than is possible using global climate models. As interpretability becomes a priority, the AI community, particularly those who seek applications in the environmental sciences, is helping to develop and test methods to learn physics from the applications of AI methods, including deep learning (Gagne et al. 2019, manuscript submitted to Mon. Wea. Rev.).
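One widely used aid of this kind is permutation importance: shuffle one input at a time and measure how much the model’s error grows. The sketch below uses a synthetic model and dataset (all names are hypothetical), not the NN analysis of the cited attribution study:

```python
import random

random.seed(2)

# Synthetic data: the target depends strongly on x0, weakly on x1,
# and not at all on x2.
data = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
target = [3.0 * x0 + 0.3 * x1 for x0, x1, x2 in data]

def model(row):
    # Stand-in for any trained model; here, the true relationship.
    return 3.0 * row[0] + 0.3 * row[1]

def mse(rows, ys):
    return sum((model(r) - y) ** 2 for r, y in zip(rows, ys)) / len(ys)

base = mse(data, target)
importance = {}
for k in range(3):
    # Shuffle column k only, leaving the other inputs intact.
    shuffled = [row[:] for row in data]
    col = [row[k] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[k] = v
    # Importance = error growth caused by destroying that input.
    importance[k] = mse(shuffled, target) - base
```

The resulting scores rank x0 above x1 and leave x2 at essentially zero, matching the construction; applied to a real model, the same procedure flags which physical inputs the model actually relies on.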
Advancing forecasting has been an important application for AI, yet much remains to be done. There have been promising results in using regime dependence to classify conditions and then training AI methods separately for each regime. Greybush et al. (2008) showed the potential of this approach for temperature forecasts, using principal component analysis to distinguish weather regimes. For forecasting solar irradiance, McCandless et al. (2016a,b) used k-means clustering to separate the cloud regimes, then trained an NN for each regime separately, showing improvement over a single NN model when sufficient training data were available. Regime dependence can be determined either implicitly by a technique (tree-based methods, for instance, essentially categorize in the first splits of the tree) or by explicitly applying a categorization method (such as clustering) and training each cluster separately with the preferred AI method. In addition, numerical weather prediction can be further combined with AI to emulate processes as discussed above, thus speeding the calculations and perhaps even improving upon empirical models of some of those processes.
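The explicit route can be sketched as follows, loosely in the spirit of the irradiance example but with entirely synthetic data and simple per-regime linear fits standing in for the NNs:

```python
import random

random.seed(3)

# Synthetic cases: a cloudy regime (cloud fraction near 0.8) and a
# clear regime (near 0.1), each with a different irradiance response.
cases = []
for _ in range(100):
    if random.random() < 0.5:
        cf = min(1.0, max(0.0, random.gauss(0.8, 0.05)))
        irr = 900.0 * (1.0 - cf) * 0.4        # cloudy-regime response
    else:
        cf = min(1.0, max(0.0, random.gauss(0.1, 0.05)))
        irr = 900.0 * (1.0 - cf)              # clear-regime response
    cases.append((cf, irr))

def kmeans_1d(xs, k=2, iters=20):
    # Lloyd's algorithm in one dimension.
    centers = random.sample(xs, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda c: (x - centers[c]) ** 2)
            groups[nearest].append(x)
        centers = [sum(g) / len(g) if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers

def fit_line(pts):
    # Closed-form least-squares line; the per-regime "model."
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    den = sum((x - mx) ** 2 for x, _ in pts) or 1e-12
    a = sum((x - mx) * (y - my) for x, y in pts) / den
    return a, my - a * mx

# Cluster on cloud fraction, then train one model per cluster.
centers = kmeans_1d([cf for cf, _ in cases])
regimes = {}
for cf, irr in cases:
    j = min(range(len(centers)), key=lambda c: (cf - centers[c]) ** 2)
    regimes.setdefault(j, []).append((cf, irr))
models = {j: fit_line(pts) for j, pts in regimes.items()}

def predict(cf):
    # Route the case to its regime, then apply that regime's model.
    j = min(range(len(centers)), key=lambda c: (cf - centers[c]) ** 2)
    a, b = models[j]
    return a * cf + b
```

Because each regime here follows its own linear law, the two per-regime fits recover both responses, whereas a single global line would split the difference between them.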
Another direction for advancing forecasting is likely to come from embracing gridded methods, which allow object identification and classification. Such methods could allow identifying objects and translating, stretching, or morphing them according to the behavior of similar objects in historical data. Like many applications in the atmospheric sciences, this is likely to grow from statistical methods. Assessment methods that compare objects have already been developed and are pushing advances in comparing model output to observations (Gilleland 2017).
The biggest development in the greater AI community in the past decade has been applications of deep learning. The growth of data stored on digital computers has provided sufficient data to train all the weights required for such deep networks. These approaches could advance solutions to some of the problems mentioned above in new ways: they can intrinsically identify regimes and take a fully gridded approach to forecasting. As we write this monograph, the AI community is in the “irrational exuberance” stage of infatuation with such methods. Although some disappointment is inevitable when not every currently offered promise is met, we expect that these methods have the potential to advance the field into a new era, one better able to interpret and utilize the large amounts of data being generated, in new ways and for new uses.
In 2015, the AI world was astounded when AlphaGo, a deep-learning system, defeated a champion human Go player (Silver et al. 2016, 2017). Go had been considered one of the hardest games for AI to master. By combining supervised learning of policy networks from human expert games with reinforcement learning of both policy and value networks through self-play, the AlphaGo system beat other Go programs 99.8% of the time and defeated the European champion by 5 games to 0. It is noteworthy that supervised learning leveraging human knowledge was a key component of its success. In much the same way, many of the advances in environmental science cited above are due to humans with knowledge of the physical systems cleverly configuring AI methods to make the most innovative progress.
As we move toward the next generation of computing, paradigms may change. It is as yet unknown whether continuing to increase the number of processors (the many-core approach) will be the architecture of the future or whether graphics processing units (GPUs) will dominate the next computers. New paradigms are likely to develop that will facilitate new techniques. Deep learning may overwhelm the methods that have gone before, or those methods may find a permanent place in our arsenal of techniques. Whatever evolves, however, it is likely that these methods will enhance our understanding of the environment, advance our ability to model and predict it, and motivate many existing and new applications in the environmental sciences.
6. Summary and concluding thoughts
This series of chapters on 100 Years of Progress in Applied Meteorology has just scratched the surface of the many applications accomplished in our field, let alone those that are possible. The first part of this series (Haupt et al. 2019a) dealt with some of the oldest and most basic applications: weather modification, applications to aviation, and security applications. We saw that although those applications started quite some time ago, they continue to grow. In addition, research in the applied realm feeds back into enhancing our understanding of the underlying physics and dynamics. The second part (Haupt et al. 2019b), together with the first section of this part, has emphasized applications that directly deal with providing for a growing population by studying urban meteorology, energy applications, air pollution meteorology, applications to surface transportation, and applications that enable agriculture and food security. We saw that decision-support systems can aid these applications. Although many of these applications are longstanding, they also continue to evolve. We need to provide our knowledge, along with weather and climate information, not only to help improve human activities in these areas, but also to show how these human-made problems impact the environment in very visible ways. Thus, it is critical that we continue to advance the science from both points of view, so that it can help provide for the growing population and also help us to understand how that burgeoning population impacts the environment that we depend on for the resources to survive and provide a quality lifestyle for all humankind.
The remainder of this chapter has emphasized some applications that are evolving very rapidly. Section 3 described the development of space weather models, which are still in their early stages. The difficulty of observing these phenomena and of integrating those observations into the models has slowed advances in the state of the science. However, advances are coming more rapidly now with increased access to space observation systems.
In section 4, we saw that although wildland fire modeling in some sense has been ongoing, the use of fully coupled atmosphere–wildland fire models is relatively new. It is only in these coupled systems that the heat, moisture, and other impacts of the fire feed back to the atmosphere, allowing modeling of some of the important phenomena, including whirls, spotting, and other aspects of fire behavior as well as the development of pyrocumulus and pyrocumulonimbus.
The final section treats applications of artificial intelligence to problems in the environmental sciences. Although ancient humans codified their observations into knowledge, it is only with the advent of modern computers that we can learn directly from data. We discussed the many applications and the movement toward newer AI methods that could revolutionize science.
The observational abilities of early humans led them to identify particular elements, as in the earth–air–fire–water model of Empedocles in ancient Greece. Our current approach to science, systematically building first-principles models grounded in observations, is also evolving as we learn directly from those observations and discover the holes in our knowledge.
Many of the applications described in this series of chapters have resulted in some type of decision-support system that enables better planning by users of the information, whether it be better managing wildland fires, planning how to integrate the variable renewable energy resources into the electric grid, or planning when to fertilize and irrigate crops. These decision-support systems often include both the first-principle models and AI models, working together to optimize the information provided to the decision makers. As we discussed in Part II (Haupt et al. 2019b), when building those systems, it is perhaps more effective to use an information value chain approach in order to hear from the end users what they really need before building a system to measure, model, interpret, and provide the weather and climate information to that user. We have also seen in this series of chapters that to build such applications requires more research, which in turn feeds our understanding of the systems that we model.
In Part I (Haupt et al. 2019a) of this applied meteorology series we began with a quote from Walter Orr Roberts, first director of NCAR, with which we wish also to end. He said, “I have a very strong feeling that science exists to serve human betterment and improve human welfare” (NCAR 2018). We have certainly made huge strides in this direction, but there are many more to make. Future generations will have a plethora of opportunities to contribute by using meteorology to make the world a better place.
Authors S. E. Haupt, S. McIntosh, B. Kosovic, F. Chen, and K. Miller were supported, in part, by NCAR funds. NCAR is sponsored by the National Science Foundation. Author Chen also acknowledges support from USDA NIFA Grants 2015-67003-23508 and 2015-67003-23460 and NSF Grant 1739705. The authors thank two anonymous reviewers, whose thoughtful comments and suggestions led to an improved chapter.