The history of over 100 years of observing the ocean is reviewed. The evolution of particular classes of ocean measurements (e.g., shipboard hydrography, moorings, and drifting floats) is summarized along with some of the discoveries and dynamical understanding they made possible. By the 1970s, isolated and “expedition” observational approaches were evolving into experimental campaigns that covered large ocean areas and addressed multiscale phenomena using diverse instrumental suites and associated modeling and analysis teams. The Mid-Ocean Dynamics Experiment (MODE) addressed mesoscale “eddies” and their interaction with larger-scale currents using new ocean modeling and experiment design techniques and a suite of developing observational methods. Following MODE, new instrument networks were established to study processes that dominated ocean behavior in different regions. The Tropical Ocean Global Atmosphere program gathered multiyear time series in the tropical Pacific to understand, and eventually predict, evolution of coupled ocean–atmosphere phenomena like El Niño–Southern Oscillation (ENSO). The World Ocean Circulation Experiment (WOCE) sought to quantify ocean transport throughout the global ocean using temperature, salinity, and other tracer measurements along with fewer direct velocity measurements with floats and moorings. Western and eastern boundary currents attracted comprehensive measurements, and various coastal regions, each with its unique scientific and societally important phenomena, became home to regional observing systems. Today, the trend toward networked observing arrays of many instrument types continues to be a productive way to understand and predict large-scale ocean phenomena.
This chapter on ocean observing briefly summarizes the history of recent scientific observation of the ocean, emphasizing how new observational capabilities have led to increased understanding of climate dynamics and interaction of the ocean and atmosphere. On the scales of the oceanic mesoscale and larger, the ocean and atmosphere are in many ways dynamically similar, but there are substantial differences in how they are observed. Key reasons for the differences are that, relative to its eddy scales and the speed of human travel, the ocean is effectively larger than the atmosphere; that it is opaque to light and radio waves; and that it presents an unbreathable medium, high hydrostatic pressures, and harsh sea states. These complicate observing and increase cost. For example, harsh sea states and large oceans demand expensive large ships and crews. Indeed, large ships and crews may be why oceanography is so multidisciplinary. Most science cruises have carried projects in several areas of oceanography (biology, chemistry, geology, biogeochemistry, geochemistry, microbiology, and physical oceanography) to utilize the ship resource.
While oceanography is multidisciplinary, a size limit demands that this chapter not be. As a chapter in a largely meteorological book, our focus is on physical phenomena in the ocean that are linked to processes in the atmosphere on the scales where ocean–atmosphere interaction is most apparent, say, time scales >O(1) day and horizontal scales >O(10) km.
Analysis and modeling of circulation physics might have grown faster with a stronger observational database, but the early database grew slowly because few observations could be made without elaborate and expensive gear between observer and target. Scarce measurements and a big ocean challenged modeling and emphasized getting more numerous and better observations. At the same time, modeling was a way to evaluate observations, provided rational array designs, and motivated observations of physical, geochemical, and biological interactions.
Improved observations came from two main strategies: 1) expanding the suite of instruments to measure more properties over a larger scale range and 2) deploying more instruments in networks to cover larger areas over longer times. Instruments came primarily from a cycle of investigator invention, field testing, and reinvention through adaptation. From the 1970s, large-area, and particularly multidisciplinary, investigations were addressed with coordinated and often networked individual sensors. At first, the networked sensors only broadcast data; the later adoption of two-way Iridium communications allowed new instructions to be issued to the sensors, letting the observing–modeling–analysis cycle close meaningfully.
Most early scientific ocean observations were made from single-ship expeditions with the goal to chart ocean geology, biology, chemistry, and circulation. The archetype was the Challenger Expedition on a converted British warship that, over 1872–76, followed a complex path between northern midlatitudes and Antarctica as it circled the globe. Water depth was measured by a rope weighted by a 500-kg sinker and adorned with visual depth marks. The bottom was dredged; biological samples were collected with bottles, shallow nets, and trawls and documented in drawings. Water temperature was profiled by “minimum T” thermometers that did not measure temperature where it increased with depth, limiting the data’s ability to measure global warming (Roemmich et al. 2012). Chemistry was sampled in bottles and analyzed on board. Visually tracked surface drifters and drogued buoys measured upper ocean velocity. This huge effort, equivalent to a moon shot, was followed by a public whose interest had been awakened by Darwin’s still fresh discoveries and the era’s general spirit of exploration.
Today’s shipboard measurement types (section 2) are similar to those in 1872, but the questions have matured. Hydrography is still a key to circulation studies, and variations of seawater composition remain essential, but now as much for understanding processes as a descriptive tracer. Water sampling includes stable and transient tracers, along with multiple chemical species, to describe biogeochemical processes. Early in the twentieth century, scaling analysis and measurement consistency brought wide acceptance that large-scale, low-frequency ocean circulation was in geostrophic balance, making large-scale ocean circulation observable. The first Ekman current meter, which mechanically sensed and averaged both speed and direction, went into service before 1910. The first shipboard acoustic Doppler current profiler (ADCP) was used in the early 1980s (Regier 1982). Energetic creativity has kept ship measurements modern and productive, and a growing international research fleet made long hydrographic transects the basis for understanding large-scale ocean circulation (Wüst 1964).
This chapter is divided into nine sections. Several address specific classes of sensors and ways to measure the ocean along with the phenomena they have described: ships in section 2; moorings, Argo floats, and underwater gliders in sections 3 and 4; and moored velocity and air–sea flux measurements in sections 5 and 6. Other sections cover programs assembled to address the special ocean–atmosphere issues of specific regions. For example, section 7 discusses El Niño–Southern Oscillation (ENSO) and the Tropical Ocean and Global Atmosphere (TOGA) array, while section 8 explores coastal ocean observing systems, and section 9 examines Arctic Ocean science. In many ways, large organized experiments are similar to Ocean Observing Systems except they have planned ends. We also discuss two important early experiments and an Ocean Observing System.
Complex mesoscale patterns in satellite temperature imagery, strong subsurface flows discovered by direct measurement, and mesoscale eddies in models all motivated the milestone 1971–73 Mid-Ocean Dynamics Experiment (MODE; MODE Group 1978). Established and new in situ measuring techniques [tall moorings, long-range sound fixing and ranging (SOFAR) floats, and vector-averaging current meters] were well tested by intercomparisons, pilot measurements, and objective array design, and were used to build a large multi-instrument network to observe mesoscale motions and currents. Results were analyzed over another 2 years. Analysis of mooring, float, and hydrographic data within the context of numerical modeling provided a new understanding of mesoscale ocean dynamics and their modeling. This motivated investigators to explore applying data-based modeling to the global ocean.
By the 1970s, the conceptual framework for understanding El Niño was established and El Niño’s impacts were recognized (Bjerknes 1969). Understanding was emerging of the specific mechanisms for tropical Pacific surface temperatures to respond to varying trade winds (e.g., Wyrtki 1975b) and how tropical sea surface temperature (SST) patterns affected winds through deep atmospheric convection. These developments led the TOGA program to examine predictability of ENSO with theory, modeling, and a large observing network (see section 7) to track seasonal and interannual variability. TOGA was the first large-scale Ocean Observing System driven by societal goals, and its extent, diversity, and investment (Hayes et al. 1991) were unprecedented. TOGA’s diverse observing network well described ENSO-like tropical variability and supported the tuning of dynamical models of ENSO, leading to the first successful El Niño prediction for 1986/87 (Cane et al. 1986).
During the 1980s, interest grew in understanding the global ocean circulation, its transport of heat, and its interaction with the atmosphere. A nearly global hydrographic survey was designed, containing many control volumes for inverse analyses, and serving as the foundation for data-assimilating numerical models. It was impractical to observe and invert the entire global ocean at one time with useful resolution. Instead, the World Ocean Circulation Experiment (WOCE) divided the ocean into control volumes and, over the period 1990–98, measured them in sequence with intervolume transports deduced primarily from hydrography (Siedler et al. 2001). WOCE global hydrographic observations were supplemented mainly with surface drifters, moored arrays across major boundary currents and circulation chokepoints, and the first midlevel profiling floats; the in situ observations were partnered with evolving satellite observations of the ocean and atmosphere. The array was designed to support inverse analyses to resolve poorly measured quantities like velocity at depth and air–sea fluxes of heat and water. These first comprehensive global ocean observations remain invaluable as large-scale modeling and analysis turn to global change. WOCE was a technical success, as shown by comparing inverse analyses with data-assimilating models and extensively exploring the assimilation procedure. It also met programmatic standards for integral metrics of performance. This has motivated new work to expand assimilation in the context of repeated global coverage, and other improvements needed to meet the Global Ocean Observing System requirements.
2. Evolution of ship-based hydrographic measurements
For the first several hundred years of ocean exploration, monitoring, and research, ships (Fig. 3-1) were the only way to observe the open ocean and most of the coastal ocean and, therefore, to deduce property and current distributions and hence ocean processes and dynamics. Although global-scale autonomous and satellite measurements began in the 1970s, ships continue to supply essential high-quality observations throughout the world oceans, providing calibration and context for developmental, autonomous, and satellite measurements. There is real synergy between these observing methods: ships do the elaborate and precise measurements, deal with developmental projects, and handle heavy gear; satellites provide global coverage for several variables; and autonomous devices provide low-cost extended in situ sampling. The synergy creates a powerful observing system.
One hundred years ago, World War I (WWI) was just ending. Oceanographic research was conducted entirely from ships, with the newest oceanographic research ships powered by both coal and sails (Fig. 3-1a). The late nineteenth and early twentieth centuries prior to WWI were rich in first global and high-latitude expeditions and in a blossoming understanding of ocean thermodynamics (salinity, temperature, density) and dynamics, principally in the Norwegian school, alongside growing sophistication in fluid dynamics and atmospheric dynamics, much of that in Great Britain. Post-WWI, the 1920s and early 1930s saw an explosion of oceanographic data collection, with major expeditions covering all of the oceans. Syntheses of observations, such as those by Wüst (1935) and Deacon (1937) and culminating in the masterful chapters on ocean circulation and properties by Sverdrup et al. (1942), provided groundwork for midcentury advances in ocean dynamics and notable textbooks (e.g., Defant 1961).
World War II (WWII) brought a new era in oceanographic exploration, based on technologies and observing systems established by navies, including the Ocean Weather Stations that served the aviation industry. Within a decade, the International Geophysical Year (IGY, 1957–60) carried out complete surveys of the Atlantic and Pacific, started a second complete Southern Ocean survey, and did groundwork for the 1960s Indian Ocean survey. The IGY years also included major expansion of modern measurements and understanding of ocean chemistry and biogeochemistry, including ocean carbon, isotopes, transient tracers, and expanded sampling of nutrients and oxygen [see listing in Marson and Terner (1963)]. Notable engineering advances in these years included the precision salinometer (Hamon and Brown 1958) and the earliest electronic conductivity–temperature–depth profiler, which evolved into the CTD (WHOI 2005).
Ship technology and instrumentation have continued to evolve. Modern research ships benefit from dynamic positioning, improved ballasting and roll tanks, and satellite global positioning system (GPS) navigation. Evolving winch and crane designs and conducting cable facilitate handling new instrumentation. Typical research cruises now include towed, undulating instruments, expendable bathythermographs (XBTs), rosette samplers replacing Nansen bottles, ADCPs (Rowe and Young 1979), CTDs that evolved from the MKII to 911, and deployments of many different types of autonomous instruments. Through the 1960s, 1970s, and 1980s, research ships continued to ply the global oceans, always observing temperature and salinity and using various approaches to expand mapping of geochemical and biogeochemical tracers [e.g., the Geochemical Ocean Sections Study (GEOSECS) program in the 1970s (Craig 1972) and Transient Tracers in the Ocean (Brewer et al. 1985) in the 1980s]. These less internationally coordinated surveys nevertheless provided relatively good coverage of ocean temperature and salinity from top to bottom, and have provided the foundation for climate records of ocean heat and freshwater content. Because of reasonable spatial coverage and good measurement accuracy post-WWII, analysis of such climate trends usually begins with the 1950s (Rhein et al. 2013).
Global satellite measurement of sea surface height, which began with the short-lived Seasat mission in 1978 (NASA 2018a), has been continuous since the launch of the Ocean Topography Experiment (TOPEX)/Poseidon in 1992 (NASA 2018b). There was international motivation to again observe the oceans systematically, resulting in WOCE (Woods 1985; Nowlin 1987; NODC 2002). The WOCE Hydrographic Programme (WHP) strategy (Fig. 3-2b) was based on requirements for quantifying transports that had evolved in the 1980s (e.g., Roemmich and Wunsch 1985; Talley et al. 1991): hydrographic sections go from coast to coast, from top to bottom, and include close station spacing (nominally 1/2° latitude with closer spacing in boundary currents and over topography) in order to cover quantitatively all circulation elements crossed by the sections and provide volume budgets.
In the process of carrying out these sampling requirements, the WHP provided enough information to construct heat and freshwater inventories of the global ocean, which have served as a climate change benchmark for the more recent decadal hydrographic surveys in CLIVAR (Climate and Ocean: Variability, Predictability, and Change; http://www.clivar.org/) and now GO-SHIP (Global Ocean Ship-Based Hydrographic Investigations Program; http://www.go-ship.org/index.html).
The WHP executed basin-scale surveys from the late 1980s through 1997 (WOCE 2002). In addition to temperature and salinity, the systematic coverage importantly included direct velocity profiling with an ADCP, biogeochemistry (oxygen, nutrients, carbon system), and transient tracers useful for ventilation time scales (chlorofluorocarbons, tritium, helium isotopes, carbon-14). Underway measurements with very high sampling resolution also became a requirement, including temperature, salinity, velocity, and pCO2 as well as meteorology and bathymetry. The underway pCO2 network evolved to become SOCAT (Surface Ocean CO2 Atlas; https://www.socat.info/), one of the most important ongoing ocean carbon datasets. Additionally, WOCE was central to development of many new measurement techniques discussed later, including the global profiling float Argo program.
The Global Ocean Observing System (GOOS), an outgrowth of the many observing systems and strategies that matured or were developed during WOCE, includes a subset of WHP sections that are occupied every 7–10 years, crossing each deep ocean basin, and also some more regional coast-to-coast sections at much higher frequency. International coordination was informal through the 2000s, when repeat hydrography was considered part of the international CLIVAR and carbon programs. The program was formalized as GO-SHIP following the Ocean Obs’09 meeting in 2009; it is part of GOOS. GO-SHIP maintains a set of rigorous standards for the decadal repeat sections, including a required set of core measurements, measurement standards, spatial and temporal sampling requirements, and data management that requires public release as soon as datasets are completed and calibrated.
Hydrographic sections crossing each deep ocean basin, following the WHP sampling strategy (Figs. 3-2a,b), remain central to global, decadal assessments of changes and variability in ocean heat, freshwater, carbon, oxygen and nutrient content, and large-scale overturning circulation [examples in Fig. 3 from Rhein et al. (2013) based on Purkey and Johnson (2010) and Khatiwala et al. (2013)]. The Argo profiling network (section 3) has mostly replaced the need for repeated research ship measurements of temperature and salinity in the upper 2000 m in the open ocean away from boundaries and is beginning to expand to the deep ocean (Deep Argo). As of 2018, Argo has not replaced highly accurate shipboard temperature and salinity measurements in the deep ocean, where a minor but significant fraction of anthropogenic ocean heat content resides (Fig. 3-3a). All Argo datasets require reference-standard measurements carried out globally by ships on an infrequent basis. Crewed ships are required for the repeated sections that are essential for estimating global changes to full depth. This is particularly important for deep waters and is likely to remain essential for some time. Biogeochemical (BGC) measurements include ocean carbon inventory and transports, which are evolving with the increasing anthropogenic CO2 in the climate system (Fig. 3-3b); acidification associated with increasing ocean carbon content and warming; ocean oxygen content changes that include expansion of very low oxygen regions in the tropics (Keeling et al. 2010); and ocean nutrient changes that affect productivity. Starting with GEOSECS in the 1970s, continuing with regional programs in the 1980s, then globally in WOCE in the 1990s, and in GO-SHIP over the past two decades, shipboard observations of the ocean’s carbon parameters have permitted mapping of the invasion of excess atmospheric carbon dioxide (anthropogenic CO2) into the ocean interior (Fig. 3-3b; Khatiwala et al. 2013) and the accompanying ocean acidification (Doney et al. 2009).
Continued quantification of the ocean’s role in the evolving planetary carbon budget using these ship-based tools is essential. BGC observing, similarly to global temperature/salinity sampling 15 years ago, has now evolved to include autonomous sampling alongside research ship sampling, and underway sampling (SOCAT) from both research ships and ships of opportunity. Pilot regional programs of BGC Argo profiling floats are maturing in situ BGC sensors (oxygen, nitrate, pH, optical chlorophyll/particulate carbon; e.g., Johnson et al. 2017). However, autonomous BGC sampling requires substantial research ship support, such as carried out by GO-SHIP, for calibration and quality control. Algorithms that combine the BGC sensor information to produce other fields, including the full carbon system, require occasional research ship measurements as the relationships between parameters evolve (e.g., Williams et al. 2017). Thus, the requirement for continuing partnership between autonomous and ship-based observing is more stringent than that between core Argo (temperature, salinity) and ship-based observing.
Sampling the geochemistry of the global ocean has evolved from the 1970s GEOSECS program and ancillary WHP programs. Today the international program GEOTRACES provides global sampling of trace elements, micronutrients, and isotopes (stable, radioactive, and radiogenic), while GO-SHIP samples the carbonate system and ventilation tracers. GEOTRACES and GO-SHIP cruises are usually carried out separately because both require large technical groups that mostly do not overlap, although both require the same set of core measurements (temperature, salinity, oxygen, and nutrients) to understand the processes that govern the different tracer distributions measured by each program.
Velocity observations are used in circulation/transport analyses. Geostrophic velocities are estimated from temperature/salinity profiles. Velocity is also measured directly with ADCPs and has been synergistic with moored observations for many decades. Direct velocity profiles are combined with CTD profiles to calculate mixing-related quantities (Kunze et al. 2006; Huussen et al. 2012) based on parameterization of dissipation and vertical diffusivity arising from internal wave turbulence (e.g., Gregg 1989; Polzin et al. 1995; Gregg et al. 2003). These fields can be inverted to diagnose the overturning (diapycnal) circulation (e.g., Kunze 2017).
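The geostrophic estimate mentioned above follows from the thermal-wind relation. A minimal sketch (pure Python; the two density profiles, station spacing, and level-of-no-motion below are hypothetical, chosen only to show the mechanics of integrating shear upward from an assumed motionless reference level) might look like:

```python
# Thermal-wind sketch: geostrophic velocity relative to a deep
# "level of no motion," from density profiles at two hypothetical
# stations. All numbers are illustrative, not real data.

G = 9.81          # gravity, m s^-2
F = 1.0e-4        # midlatitude Coriolis parameter, s^-1
RHO0 = 1025.0     # reference density, kg m^-3

def geostrophic_velocity(rho_west, rho_east, dx, dz):
    """Velocity (m/s) at each level relative to the deepest level,
    using the thermal-wind relation dv/dz = -(g / (f*rho0)) * drho/dx.
    Profiles are listed from the surface (index 0) downward."""
    n = len(rho_west)
    shear = [-(G / (F * RHO0)) * (rho_east[k] - rho_west[k]) / dx
             for k in range(n)]
    # Integrate the shear upward from the deepest level,
    # which is assumed motionless.
    v = [0.0] * n
    for k in range(n - 2, -1, -1):
        v[k] = v[k + 1] + 0.5 * (shear[k] + shear[k + 1]) * dz
    return v

# Hypothetical density profiles 100 km apart, with 500-m level spacing.
rho_w = [1024.0, 1026.0, 1027.0, 1027.7, 1027.9]
rho_e = [1024.5, 1026.3, 1027.1, 1027.75, 1027.9]
v = geostrophic_velocity(rho_w, rho_e, dx=100e3, dz=500.0)
print([round(x, 3) for x in v])  # surface-intensified flow, ~0.3 m/s
```

The assumed motionless reference level is exactly the weak point that direct velocity measurements, whether from ADCPs, moorings, or floats, are used to remove.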
a. Time series stations and coastal surveys
Research ships have been used routinely since the 1920s to occupy time series stations in midocean basins. From 1940 until the 1980s, there was a Northern Hemisphere network of Ocean Weather Stations (OWS), which collected meteorological information for aviation purposes (Dinsmore 1996). Most included regular profiling of the ocean, providing long time series of ocean properties. Following the advent of satellite measurements of the atmosphere in the 1970s, most OWSs were abandoned, but some of the oceanographic time series were continued, notably Ocean Station Papa in the northeast Pacific and OWS Mike in the Norwegian Sea. A long oceanographic time series was initiated at Bermuda in 1954 [Bermuda Atlantic Time-Series Study (BATS)]. A similar time series, the Hawaii Ocean Time-Series (HOT), was initiated at Hawaii during WOCE, and both continue to the present, providing many decades of physical, biogeochemical, and biological data.
Ship-based coastal surveys have been carried out for longer than a century by most coastal nations. Routine oceanographic surveys including hydrographic measurements (temperature, salinity, nutrients, oxygen) have been common since the 1930s and 1940s. Each region has been sampled differently and funding to support the long time series has had different sources, depending on the nation and region, but the purpose has generally been to understand the evolving ecosystem and relationship to physical structures. These long regional datasets have been essential for understanding coastal processes, circulation, and fisheries.
Autonomous sampling on moorings has begun to replace some of the aspects of these time series hydrographic stations and coastal observing systems, thus removing sampling/aliasing problems. The Ocean Observatories Initiative (OOI) has implemented moorings in some of these long-sampled locations, including not only temperature and salinity, but also air–sea flux and biogeochemical sensors, producing continuous time series where none existed before. Autonomous gliders are replacing ship functions along routinely surveyed coastal sections, but are limited by the sensors they can carry and ranges covered, typically 2500–3500 km. For suites of observations that can knit all of the components of the coastal systems together, including physical, biogeochemical, and biological, ships have been and remain essential.
b. The future
Ship-based observations will evolve and become further entwined with the growing autonomous observing systems (NASEM 2017). From the earliest days of ocean observing, physics, chemistry, and biology were sampled together. For many recent decades, these endeavors were separated into ocean–atmosphere–climate physics, biogeochemistry, and biology–ecology, but they are increasingly combined as their interdependence is again recognized, and as the high cost of operating ships in remote regions drives efficiencies. Ships operating within the GOOS and coastal ocean observing systems provide a full suite of core physical and BGC measurements and are platforms for development of novel techniques, including a growing presence of evolving biological measurements. Research ships will continue to provide the reference standards for accuracy required for growing autonomous sampling.
3. The evolution to the Argo observing system
During the 1970s, increasing interest in understanding air–sea interaction for extending the time scale of weather prediction focused on possible roles of atmospheric forcing of the ocean and oceanic forcing of the atmosphere (Namias 1972). The available datasets at the time consisted mainly of sea surface temperature and sea level pressure, both of which were collected by commercial, naval, and research ships making routine meteorological observations. No clear evidence was found of oceanic forcing of the atmosphere on long time scales (Davis 1976). However, considering that the large-scale geostrophic ocean circulation, including the transport and storage of heat, could be an important driver (Bryan et al. 1975), it followed that subsurface ocean temperature and salinity observations over broad areas of the ocean were needed.
For that purpose, research vessels were not numerous enough to provide the needed areal coverage, but XBTs deployed by commercial ships held much promise. Mechanical bathythermographs (MBTs) had been developed by A. Spilhaus and used for military purposes before, during, and after World War II (Shor 1978). Although MBTs were cumbersome (requiring a lightweight winch) and inaccurate, they were nevertheless quick and inexpensive and did not require a research vessel. Nearly 200 000 temperature profiles were collected using MBTs (Fig. 3-4) between 1938 and 1948. The XBT followed in the 1960s (Snodgrass 1966); its system of two spools of very light insulated wire paying out simultaneously from the sinking probe and along the sea surface from a shipboard plastic canister freed the instrument from winch operation and allowed it to be deployed, without slowing, from any sort of vessel.
In addition to continuing military interest, the research community found valuable opportunity in XBT technology, applying it to design and implement XBT networks measuring subsurface temperature to depths of a few hundred meters along widespread commercial shipping routes (White and Bernstein 1979). The number, frequency, and variability of shipping routes made it possible to visualize ocean variability over large areas, rather than being confined to sampling along widely separated transects by research vessels. This activity began in the North Pacific (White and Bernstein 1979), where the data proved valuable in describing a range of oceanographic phenomena from mesoscale eddies to decadal climate variability. Subsequently, the XBT network design was extended to the Indian Ocean (Phillips et al. 1990), the tropical Pacific (Meyers et al. 1991), the eastern Pacific (Sprintall and Meyers 1991), the Atlantic (Festa and Molinari 1992), and globally from 30°S to 60°N (White 1995). By 2000, over 2 million XBT profiles had been collected worldwide (Fig. 3-4). Notable shortcomings of the XBT networks are apparent in Fig. 3-4, particularly in the Southern Ocean and the broad interiors of the South Pacific and south Indian Ocean, where there simply was never enough shipping traffic for regular sampling.
The early ideas regarding the need for systematic collection of subsurface ocean data were reinforced during the 1970s–1990s. During the 1970s, early satellite datasets including SST revealed large variability on a wide range of spatial and temporal scales both regionally and globally. The limitations of the XBT networks in sampling patterns of subseasonal to interannual variability in the subsurface ocean were made apparent by comparison with global satellite coverage. The importance of the lack of regular sampling became apparent in late 1982, when the “El Niño of the Century” (Cane 1983) went undetected until it began to cause havoc through high tides, storm-driven surf, and flooding rainfall along the west coast of North America. The result was installation of a permanent tropical Pacific observing system as part of the TOGA project, including moored buoys, XBTs, surface drifters, and sea level gauges [McPhaden et al. (1998) and section 7], to ensure that the surprise arrival of El Niño would not be repeated. Another important milestone during this period was the WOCE of 1991–97, which obtained a single global survey of ocean properties and many repeating transects, placing a strong focus on the ocean’s roles in the climate system.
The development and widespread deployment of modern surface drifters was stimulated by the scientific/observational needs of WOCE and TOGA (McPhaden et al. 1998). Surface drifters had the dual use of providing calibration data for sea surface temperature measurements made by satellites while also directly measuring the surface velocity field. Several designs of surface drifters evolved with differing water-following characteristics and endurance. By the end of WOCE and TOGA, over 700 surface drifters were spread around the global ocean, with about one-third in the tropical Pacific. The Global Drifter Program continues today, with 1453 active drifters, some measuring barometric pressure and sea surface salinity as well as sea surface temperature, and most now transmitting through the Iridium cellular network rather than the slower, one-way Argos system.
Just as MBT technology evolved gradually into the broadscale XBT networks, another thread of technological progress underpinning modern global observing began with John Swallow’s use of neutrally buoyant floats for tracking subsurface ocean circulation (Swallow 1955). Swallow used aluminum tubing scavenged from construction scaffolding to build instruments, containing a sound source, that were carefully ballasted to be neutrally buoyant at a prescribed depth (Gould 2005). A research vessel with hydrophones mounted underwater fore and aft was able to measure the azimuth angle of the emitted sounds and, by steaming around the floats, to estimate their positions. Early “Swallow floats” were responsible for several scientifically important findings, including confirming the existence of a Deep Western Boundary Current in the North Atlantic (Swallow and Worthington 1957).
In spite of the exciting findings, the cumbersome use of research vessels for short-range acoustic tracking limited the deployment of Swallow floats. This problem was overcome by using long-range acoustic transmissions from neutrally buoyant floats, initially tracked by government hydrophone networks via the SOFAR channel (Rossby and Webb 1970, 1971), following an earlier suggestion by Stommel (1955). SOFAR floats were very successful but still rather awkward, having large resonant cavities (like organ pipes) and high energy requirements, both needed for long tracking range. A more efficient approach was taken by switching source and receiver [hence termed RAFOS by Rossby et al. (1986)]. For triangulation of float position, a small array of moorings with relatively large sources made regular transmissions. The floats recorded arrival times of transmissions to be telemetered ashore at the end of the float mission. Practical considerations, mainly associated with the moored sound sources, still limited this technology to regional deployment.
Davis et al. (1992) replaced the acoustic tracking with satellite location systems by adding a buoyancy pump so the float could return to the sea surface periodically. The original Autonomous Lagrangian Circulation Explorer (ALACE) float’s mission was to measure and explore middepth velocity from the float’s trajectory (Davis 1998), and to provide a “level of known motion” for WOCE hydrographic transects. During WOCE, the addition of a profiling CTD to this satellite-tracked float, and the deployment of over 1200 floats around the world (Davis et al. 2001), provided a demonstration of the global potential of what would become the Argo profiling float, as well as pointing toward further technology advances. After the successful use of Profiling ALACE (PALACE) floats in WOCE, Scripps and Webb Research Corporation each developed second-generation floats with improved buoyancy engines and 2000-m depth ratings.
By the late 1990s, physical oceanographers around the world had participated in the WOCE global survey and had become familiar with the new technology of profiling floats. Opportunity beckoned to implement a global array that might carry out the equivalent of a WOCE hydrographic survey not every 10 years, but rather every 10 days. A design for the global array, Argo (Argo Science Team 1998), consisting of 3300 profiling floats distributed at 3° × 3° spacing, was endorsed by the WCRP’s Climate Variability and Predictability (CLIVAR) project and by the Global Ocean Data Assimilation Experiment (GODAE). Critically for its development, Argo was both a multinational scientific collaboration and a multinational agency initiative (NASEM 2017). The first Argo floats were deployed by Australia in 1999 and, in 2007, the 3000-float threshold was surpassed. Today’s Argo array, with over 3800 floats profiling to 2000 m every 10 days, is remarkably similar to the conceptual Argo proposed 20 years ago (Fig. 3-5). The Argo Program has been coordinated by the Argo Steering Team, initially called the Argo Science Team, since the program’s beginning in 1998. Although Argo national programs typically have strong regional interests, all programs have agreed that maintaining the global array is Argo’s highest priority and that a portion of their contribution will be devoted to global coverage.
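The 3300-float target follows from the array geometry: one float per 3° × 3° box over the ice-free open ocean, taken here as the band between 60°S and 60°N. A back-of-envelope count can be sketched as follows; the 70% ocean fraction is an assumed round number for illustration, not a figure from the design document.

```python
# Back-of-envelope check of the Argo design target of ~3300 floats:
# one float per 3-degree x 3-degree box over the ice-free ocean, taken
# here as the 60S-60N band with an assumed ~70% ocean coverage.
n_lon_boxes = 360 // 3            # 120 boxes around a latitude circle
n_lat_boxes = 120 // 3            # 40 boxes between 60S and 60N
ocean_fraction = 0.7              # assumed fraction of the band that is ocean
n_floats = n_lon_boxes * n_lat_boxes * ocean_fraction
print(round(n_floats))            # close to the 3300-float design target
```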
The Argo Program has important synergies with Earth-observing satellites and with in situ observing networks. Argo’s name derives through Greek mythology from the Jason series of satellite altimeters, each measuring sea surface height (SSH) while Argo observes the subsurface changes in density that constitute the steric component of SSH variations. Other satellite datasets related to Argo include those for wind stress, sea surface temperature, sea surface salinity, and gravity (for ocean mass variations). Among in situ observations, the GO-SHIP repeat hydrography program [Talley et al. (2016) and section 2] is perhaps the most closely related and complementary to Argo. GO-SHIP provides state-of-the-art reference data that are critical for detection of drift in the less accurate Argo sensors. In turn, Argo samples a broad range of spatial and temporal scales that are not seen by the sparse decadal hydrographic lines.
Other sustained in situ networks that complement Argo include the global surface drifter network, moored observations in the boundary current regions and tropical oceans, and the modernized XBT networks. The XBT networks have evolved away from the broadscale area-sampling niche that is now occupied by Argo, with its more spatially complete, deeper, and more accurate measurements. Instead, the XBT network was reconfigured toward line modes of sampling (high spatial resolution lines and frequently repeating lines; Goni et al. 2010), for example, providing sampling on short scales across boundary currents that are not resolved by Argo (Zilberman et al. 2018). The modern ocean observing system integrates all of these satellite and in situ observing system elements in order to span as great a range as is practical of the temporal and spatial scales of ocean variability.
Since surpassing the 3000-float milestone in 2007, the Argo array has maintained its global coverage for more than a decade, having obtained 1.8 million profiles by early 2018 (Fig. 3-4) and extending spatial coverage into seasonal ice zones and marginal seas. The lifetime of floats has been extended from about 3 years initially to more than 5 years, mitigating effects of inflation and flat budgets. An ongoing transition to bidirectional Iridium communications, by reducing surface times from 10 h to 20 min, has minimized array clumping, spreading, grounding losses, and biofouling. Floats can receive changes to their cycle times, drifting or profiling depths, and other mission parameters to enable new applications to be developed. All Argo data are publicly available without cost, and 90% of profiles are available for download within 24 h of collection at either of two Argo Global Data Assembly Centers.
The Argo Data Management System has broken new ground through its extensive documentation of float metadata and development of delayed-mode quality control procedures (Owens and Wong 2009), applied consistently across the array to deliver high-quality data for research. The JCOMMOPS (Joint Technical Commission for Oceanography and Marine Meteorology in situ Observing Programmes Support Center) Argo Information Center provides tools for float tracking and sorting among all Argo floats, delivering an evolving picture of Argo’s global status and progress. The Argo Steering Team and the Argo Data Management Team work closely together for operational coordination, technology improvement, troubleshooting, data quality control, and data delivery. Through its international framework, and due to the willingness and cooperative spirit of the Argo National Programs, Argo is the most internationally collaborative effort in the history of oceanography.
Argo continues to make progress toward complete coverage of the global upper ocean. Even in its present state, with sampling gaps in the high-latitude oceans and some marginal seas, the transformative value of Argo is apparent. Argo’s bibliography includes over 3000 research papers and 250 PhD theses, addressing a broad range of topics (Riser et al. 2016). Nevertheless, the present Argo domain of 0–2000-m depth includes only half of the ocean volume. Argo’s temperature–salinity–pressure sensors leave unaddressed many questions about the ocean’s biogeochemistry and ecosystem variability. To address the depth limitation, Deep Argo (Johnson et al. 2015; Zilberman and Maze 2015) is extending sampling to the ocean bottom. This is necessary to observe interannual to multidecadal variability and trends in the deep sea, and to close planetary budgets of heat and freshwater. Deep Argo will measure the component of sea level variability and rise due to changes in ocean density, and it will observe the full-depth ocean circulation. Deep Argo will provide critical datasets for initializing ocean forecast models and reanalyses.
BGC Argo (Biogeochemical-Argo Planning Group 2016), by installing additional sensors for oxygen, pH, nitrate, and bio-optical properties, is improving understanding of fundamental biogeochemical cycling in the ocean, which is the foundation of biological productivity and carbon cycling. Both BGC and Deep Argo are formally elements of the international Argo Program, and in both cases regional pilot arrays totaling about a hundred floats have been deployed over several years. These deployments have demonstrated the technical readiness and scientific value of the Argo enhancements as a step toward global implementation. Ongoing issues for both CTD and BGC sensors include their progress toward targets for accuracy and stability and the need for high-quality shipboard reference data that are used for sensor validation.
4. Underwater gliders
Underwater gliders are a successor technology to profiling floats in that they profile vertically by changing buoyancy, but they also have the ability to move horizontally. Regier and Stommel (1979) briefly discussed adding maneuverability to SOFAR floats. In a visionary article, Stommel (1989) described a fleet of ocean gliders that would occupy the global sections carried out during WOCE, directed from a mission control center. These gliders were meant to navigate autonomously, taking hydrographic data and reporting them back by satellite when at the surface.
By the 1990s, profiling float buoyancy control, self-contained CTDs, GPS, and two-way satellite communications were technologies that enabled two competing glider development efforts. One was collaborative between Scripps Institution of Oceanography (SIO), Woods Hole Oceanographic Institution (WHOI), and Webb Research; the other was by the University of Washington and led to the Seaglider. Differences in design approaches led the first effort to split into developments of the deep ocean Spray glider by SIO and WHOI and the shallow water (200 m) Slocum glider by Webb Research. Detailed descriptions of these gliders can be found in Davis et al. (2003), Rudnick et al. (2004), and the review of research using gliders by Rudnick et al. (2016). None of these gliders has the 5-year duration envisioned by Stommel, but all became effective on missions of several months, particularly near ocean boundaries, even through strong currents such as the Gulf Stream.
Like profiling floats, gliders descend and ascend by changing their volume; they move horizontally, on both ascent and descent, by using their wings to convert vertical motion into forward motion. Today’s gliders have similar physical and performance specifications largely because their designers sought limited construction and operation costs. They can be carried by two people and deployed from small vessels, even 6-m rigid-hull inflatable boats. Practical limits to buoyancy change and a maximum size for handling led to designs of comparable size (2-m length) and mass (~50 kg). The energy to travel a given straight-line distance increases approximately as velocity squared, and a minimum velocity is needed to exceed ambient currents. Thus, the goal of ranges of thousands of kilometers led to nominal horizontal velocities of about 0.25 m s−1, or 20 km day−1. A typical dive to a depth of 1 km and back is made with a glide angle near 20° in about 5 h. Glider range depends on operating speed, sensor complement, and sensor operation. A Spray equipped with a pumped CTD typically travels about 2500 km over 4 months.
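These performance figures are mutually consistent to within rounding, as the arithmetic below shows; it simply recomputes dive geometry and range from the nominal speed and glide angle quoted above, and is illustrative rather than a vehicle specification.

```python
import math

# Rough consistency check of the glider numbers quoted in the text:
# 0.25 m/s nominal horizontal speed, ~20 deg glide angle, 1-km dives,
# ~2500 km over ~4 months. Inputs come from the text, not measured data.
speed = 0.25                        # nominal horizontal speed, m/s
glide_angle = math.radians(20.0)    # glide angle from horizontal
depth = 1000.0                      # dive depth, m

# Horizontal distance covered by one down-and-up dive cycle
horiz_per_dive = 2.0 * depth / math.tan(glide_angle)   # ~5.5 km
dive_hours = horiz_per_dive / speed / 3600.0           # ~6 h, of order the ~5 h quoted

km_per_day = speed * 86400.0 / 1000.0                  # ~22 km/day
range_4_months = km_per_day * 120.0                    # ~2600 km, close to 2500 km

print(f"{horiz_per_dive / 1000:.1f} km per dive, {dive_hours:.1f} h per dive")
print(f"{km_per_day:.0f} km/day, {range_4_months:.0f} km in 4 months")
```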
A unique product of gliders is the depth-average water velocity, used both for navigation and for scientific purposes. Depth-average water velocity is calculated from the difference between glider velocity over the ground and through the water (Rudnick et al. 2018). This depth-average velocity is the set experienced by the glider and is essential to navigation between waypoints or across strong currents. Depth-average velocity is also a key scientific product, allowing estimation of transport through a glider section. CTD profiles from successive dives measure the cross-track geostrophic shear, which can be referenced to the depth-average water velocity to find the absolute cross-track velocity profile. In addition, an onboard ADCP (Todd et al. 2017) directly measures velocity shear, which can be referenced to the depth-average velocity to yield absolute water velocity.
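The referencing step amounts to shifting a relative velocity profile by a constant so that its depth average matches the glider's measured depth-average velocity. A minimal sketch, in which `v_rel` stands in for a shear-derived (geostrophic or ADCP) cross-track profile and all values are invented for illustration:

```python
import numpy as np

# Reference a relative (shear-derived) velocity profile to a glider's
# depth-average velocity. v_rel is known only up to a constant offset;
# the profile shape and the 0.05 m/s depth average are made-up values.
z = np.linspace(0.0, 1000.0, 101)     # uniform depth grid, m
v_rel = 0.3 * np.exp(-z / 300.0)      # relative cross-track velocity, m/s
v_depth_avg = 0.05                    # from GPS displacement minus dead reckoning, m/s

# Shift the profile so its depth average equals the measured average
v_abs = v_rel + (v_depth_avg - v_rel.mean())
print(f"{v_abs.mean():.3f} m/s")      # recovers the measured depth average
```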
Underwater gliders have proven particularly useful for sustained observation of boundary currents. They profile continuously to control position, naturally producing data with fine horizontal resolution, and operational costs are minimized by deploying and recovering from small boats close to land. These strengths match the demands of boundary currents, which lie close to land and require fine horizontal resolution. Examples of sustained glider surveillance are found off California, in the Solomon Sea, and in the Gulf Stream, as summarized below.
The longest continuous glider observations are from the California Underwater Glider Network (CUGN) and have been sustained (Rudnick et al. 2017) for over a decade since its beginning in 2006 (Davis et al. 2008). The CUGN operates gliders along three of the traditional California Cooperative Oceanic Fisheries Investigations (CalCOFI) lines off Dana Point (line 90.0), Point Conception (line 80.0), and Monterey Bay (line 66.7). With an overarching goal of observing the regional effects of climate variability, the CUGN has covered the 2009/10 El Niño (Todd et al. 2011), the North Pacific marine heat wave of 2014/15 (Zaba and Rudnick 2016), and the 2015/16 El Niño (Rudnick et al. 2017).
The CUGN produces the SoCal temperature index, the temperature at 50-m depth averaged over the inshore 200 km of line 90. It was strongly correlated with sea surface temperature in the equatorial Pacific before 2014, but this relation broke down at the start of the North Pacific marine heat wave (see Fig. 3-6), which coincided with an arrested El Niño on the equator. The extreme temperatures of the 2015/16 El Niño were followed by a return to normal conditions at the equator while California waters remained anomalously warm. Description of this marine heat wave shows the payoff of sustained subsurface sampling, which can be maintained only with cost-effective platforms.
Another long glider-based time series has been maintained across the Solomon Sea since 2007 (Davis et al. 2012). Flow through the Solomon Sea is the western boundary current of the South Pacific’s tropical gyre and carries water masses from the subtropical South Pacific to the equatorial band, where intense air–sea interaction can amplify its impact. Solomon Sea transport is a substantial fraction of the total flow into the equatorial warm pool and, with large-amplitude interannual variability, it is suspected of influencing equatorial climate variability. The most important goal of glider sampling in the Solomon Sea is to describe the impact of this low-latitude western boundary current (LLWBC) on the heat budget of the equatorial warm pool as it affects the overlying atmosphere.
Glider transects of the southern Solomon Sea show a shallow flow from the east entering the sea near its middle, then joining and flowing over a deeper western boundary current (WBC) from the Coral Sea to form a two-layer WBC. Both layers exhibit quasi-annual and ENSO-related variability and transport fluctuations in the upper 700 m. This LLWBC is part of the mass and heat exchange between the subtropics and the equator that constitutes the big picture of ENSO, but like many other aspects of ENSO, its relation to Solomon Sea transport varies between events. Shallow-layer mass transport, plotted in Fig. 3-7, is well correlated with AVISO (Archiving, Validation, and Interpretation of Satellite Oceanographic Data) sea level differences across the sea, and with the variations of geostrophic flow seen between a pair of moorings spanning the sea.
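The correlation with sea level differences reflects geostrophy: for a surface-trapped layer, the passage width cancels and transport scales as g·Δη·H/|f|. The scaling can be sketched numerically as below; the sea level difference, layer depth, and latitude are placeholder values for illustration, not Solomon Sea observations.

```python
import math

# Geostrophic scaling of upper-layer transport from the sea level
# difference across a passage: T ~ g * d_eta * H / |f|.
# All numbers below are assumed placeholders.
g = 9.81                                          # gravity, m/s^2
omega = 7.2921e-5                                 # Earth's rotation rate, rad/s
f = 2.0 * omega * math.sin(math.radians(-10.0))   # Coriolis parameter near 10 deg S
d_eta = 0.10                                      # assumed sea level difference, m
H = 300.0                                         # assumed upper-layer depth, m
T_sv = g * d_eta * H / abs(f) / 1.0e6             # transport in Sverdrups (1 Sv = 1e6 m^3/s)
print(f"{T_sv:.0f} Sv")                           # order-10 Sv, a plausible magnitude
```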
The Solomon Sea is remote, with a primitive infrastructure at the end of an expensive transportation route to the United States. It serves as a demanding test of the ability to sustain gliders in a remote site. Ultimately, success depends on having efficient on-site vehicle preparation, good communication with home, and involving local residents. Gliders are well suited to a society that works in small boats and knows the sea, so islanders do much of the at-sea work and help prepare gliders; all told, a Solomon Sea operation is little more expensive than its U.S. equivalent.
The Gulf Stream, which is stronger and deeper than the Solomon Sea WBCs, was first crossed by a glider in fall 2004 when a Spray crossed between Woods Hole (Massachusetts) and Bermuda. Combining this transect with other subsequent crossings of the Gulf Stream and Loop Current in the Gulf of Mexico (Rudnick et al. 2015; Todd et al. 2016) described the vertical and cross-stream structure of potential vorticity and its change between two locations along the North Atlantic’s western boundary current.
Gliders are now being deployed into the Florida Current off Miami to occupy transects across the Gulf Stream as they are advected downstream to Cape Hatteras (Fig. 3-8). These sections describe the downstream changes in the structure of the Gulf Stream (Fig. 3-9). Variations in the vertical speed of the gliders and shorter-scale variations in observed property profiles have been used to identify large internal waves associated with strong flow over topography (Todd 2017). These observations demonstrate the capabilities of gliders to operate in strong WBCs, providing real-time observations of the strong shears and property gradients that make WBCs unique. Even in these conditions, piloting the gliders involves at most a single command per dive (~6 h) that can usually be generated algorithmically.
While early gliders carried only CTDs, they now carry a wide range of sensors adapted to the platform. These include chemical (nitrate, oxygen, pH), bio-optical (fluorescence, optical backscatter), and acoustic sensors (backscatter, ADCP, whale tracking) and even a plankton camera. There are diverse mission types, including local and broad-area coastal time series (Ohman et al. 2013), specific regional experiments (Ramp et al. 2009), and multiyear surveillance of key ocean regions (Rudnick et al. 2017). A short, incomplete list of process studies using gliders includes the Salinity Processes in the Upper Ocean Regional Study (SPURS) investigation of air–sea interactions in the subtropical North Atlantic (Lindstrom et al. 2017), the North Atlantic bloom experiment (Mahadevan et al. 2012), eddy studies in the Gulf of Mexico’s Loop Current (Rudnick et al. 2015), and a study of isopycnal stirring and diffusivity in the North Pacific subtropical gyre (Cole and Rudnick 2012).
Underwater gliders may be especially well suited for observing polar regions, where their multimonth endurance, ability to control position, and ability to profile to the ice–ocean interface allow sampling in these difficult environments. Gliders operating under ice incorporate enhanced autonomy to operate for extended periods without human intervention and determine their position by multilateration from an array of acoustic beacons (Webster et al. 2014). Seagliders using acoustic navigation to operate under sea ice collected 6 years of data to quantify fluxes through the Davis Strait (Curry et al. 2014). Gliders have bridged open water, through partial ice cover into pack ice in the Beaufort Sea marginal ice zone (Lee et al. 2017), and occupied sections under the Dotson ice shelf in the western Antarctic (Lee et al. 2018). With increased interest in ice–ocean interactions, the use of gliders in polar regions is likely to grow.
5. Evolution of ocean observing using moored instrumentation
Ocean observing during the first half of the twentieth century principally involved lowering and/or suspending instruments from ships. To move beyond these limited-duration measurements, work began midcentury to develop long-duration oceanographic moorings. Bill Richardson led an effort in the late 1950s to establish a line of moored stations between Woods Hole and Bermuda, from station A on the continental shelf to station L in the Sargasso Sea. A fiberglass toroid, 3.3 m in diameter, was the surface buoy; a railroad wheel was the anchor; and a polypropylene or nylon mooring line connected the buoy and anchor with sensors in between. The current meters had Savonius rotors and vanes and recorded data on 16-mm movie film. The duration Richardson hoped for was not attained: moorings typically lasted on station a month or less, and the Savonius rotor current meters performed poorly under surface buoys.
In 1963, Nick Fofonoff and Ferris Webster replaced Richardson leading the WHOI Buoy Project, and they began an engineering program to diagnose failures and test new approaches on the continental slope southeast of Woods Hole. A significant source of mooring line failure was fish biting the line, typically in the upper ocean, so plastic-jacketed wire rope was put in service above ~2000 m. At the same time, development of subsurface mooring technology began. This class of mooring, with all flotation elements below the surface, is subject to lighter dynamic loads from surface waves and winds than are surface moorings. Acoustic releases were developed that are placed near the bottom of the mooring, just above the anchor, and are commanded acoustically to release the anchor for mooring recovery. Improved reliability and endurance resulted, at the expense of observations near the surface, yet challenges continued. A story often retold at WHOI recounts a cruise in August 1967 that set sail to service the Woods Hole to Bermuda mooring line. When the crew discovered that all the deployed moorings had been lost, they decided not to deploy any of the replacements. Moored observations by John Swallow and colleagues at the National Institute of Oceanography (NIO) in the United Kingdom were also undertaken in the 1960s. John Crease deployed moored current meters in the Faroe–Shetland Channel in 1966, and Swallow set current-meter moorings southeast of Madeira the same year. Data return from these deployments was low. Swallow joined Val Worthington of WHOI on a 1967 cruise to recover WHOI moorings set in Denmark Strait; only 10 of the 30 deployed current meters were recovered, and only one of these provided usable data.
Despite disappointments, engineering work slowly improved current meters, acoustic releases, mooring design, and the manufacture of subsurface moorings. By the 1970s, major oceanographic programs such as the Mid-Ocean Dynamics Experiment (MODE) and POLYMODE (MODE Group 1978; Collins and Heinmiller 1989) used moorings as a fundamental observing tool. Distributing flotation along the mooring line improved reliability and controlled “blow down” of moorings by currents. Hollow 0.4-m glass spheres were encased in plastic covers and bolted to a chain along the mooring.
Expertise to build and deploy moorings was developed at other institutions around the world. In parallel, work to improve the reliability of oceanographic surface moorings was renewed in the 1970s, as summarized in section 6. By the mid-1980s, surface moorings had joined subsurface moorings as standard oceanographic observing platforms. Subsurface moorings are now routinely deployed for 2-year intervals and some have been on station for 5 years. Surface moorings that experience more wear and biofouling are typically recovered and replaced on an annual basis. Surface moorings today are both of taut- and slack-wire design; subsurface moorings support a distribution of fixed-depth sensors and/or moving instrument platforms (Fig. 3-10).
The need for real-time ocean information motivated development of data telemetry from meteorological moorings. Initial work on satellite data transmission utilized the Argos system established in 1978 (https://en.wikipedia.org/wiki/Argos_system). By the early 2000s, the subsequent Iridium satellite system (https://www.iridium.com/) was providing higher data-flow rates and two-way communication capability. Data telemetry from subsurface instruments required additional hardware to link them to the surface. Acoustic data links (Freitag et al. 2005), inductive data links that use plastic-jacketed mooring wire as a conductor (Fougere et al. 1991), and electrical cables have all been used. Subsurface moorings, lacking a surface expression, pose a greater telemetry challenge. Researchers have worked on systems that utilize expendable, buoyant data pods periodically released from the mooring (e.g., Frye et al. 2002). More recently, gliders programmed to operate around subsurface moorings have been utilized to ferry data between subsurface instruments and the air–sea interface.
a. Moored instrumentation
Historically, a focus for moored instrumentation was measurement of ocean current at single points in space. For much of the twentieth century, current meters sensed current speed and direction separately using a variety of vanes, propellers, and rotors; for a review, see Dickey et al. (1998), Williams et al. (2009), and references therein. Early mechanical current meters, such as the Ekman current meter that dropped balls into a binned receiver, were succeeded by instruments that recorded on film, giving improved temporal information. Analog and then low-power digital tape recorders were developed in the 1960s and 1970s and were in turn succeeded by solid-state recording. The Aanderaa RCM4 Savonius rotor current meter (Dahl 1969), developed in the mid-1960s, recorded temperature and, optionally, pressure. The vector-averaging current meter (VACM; McCullough 1975; Beardsley 1987) that came on the scene shortly thereafter was a technical advance: rather than averaging speed and direction for recording, it computed, averaged, and recorded east and north velocity components. The vector measuring current meter (VMCM), developed in the late 1970s, met the need for a current meter that performed better on surface moorings, where waves and mooring heave biased rotor and vane sensors. The VMCM’s two orthogonally mounted propellers responded primarily to the vector velocity (Weller and Davis 1980).
Temperature sensing was added to many current meters, and stand-alone temperature recorders became available in the 1990s. These were soon followed by moored instruments that measured temperature, conductivity, and pressure, allowing salinity to be observed from moorings. In recent years, multidisciplinary sensors have been developed for use on moored instruments. For example, the Multi-Variable Moored System (MVMS), an enhancement of the VMCM developed in the early 1990s by Dickey and colleagues, incorporated a beam transmissometer, fluorometer, scalar irradiance [photosynthetically active radiation (PAR)] sensors, and dissolved oxygen sensors.
Mechanical current meters are challenged by biofouling and entanglement by fishing lines and have complex response characteristics (rotor stiction being one). These issues led engineers to develop current meters without moving parts. Several single-point current sensing technologies were explored over the last 40 years, including electromagnetic, differential acoustic travel time, and acoustic Doppler. The first of these senses the voltage induced by flow of conducting seawater through an applied magnetic field; flow distortion by the current meter itself limits the accuracy of this technique. Errors in acoustic travel time devices result if eddies, shed from the current meter body or transducer mounts, enter the acoustic paths. In single-point Doppler devices, the sample volume is remote from the electronics case, typically O(1) m from the housing, and is thus free from flow distortion. Downsides of this technology include reliance on acoustic scatterers in the water (that are assumed to move with the water), larger uncertainty in individual measurements (requiring averaging of multiple samples to reduce uncertainty), and greater energy requirements compared to a travel time sensor.
The related ADCP returns profiles of ocean velocity by measuring the Doppler frequency shift of acoustic backscatter from multiple range bins. Spiess and Pinkel developed a large, long-range ADCP mounted on SIO’s research platform FLIP (Floating Instrument Platform). Based on Cox’s suggestion that a smaller ADCP might be developed by range-gating existing ships’ logs, Davis and Regier (SIO) and Rowe and Deines [Rowe–Deines Instruments (RDI)] developed both shipboard and moored ADCPs. Moored ADCPs sample currents at many depths, replacing several single-point current meters and eliminating false shears stemming from compass and velocity calibration errors at different levels. The distance between acoustic beams sensing different flow directions introduces errors at small space and time scales (e.g., internal waves; Polzin et al. 2002). ADCPs are now available at a variety of different frequencies, with differing ranges and resolutions, and with different acoustic beam configurations.
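The underlying measurement in each range bin is a Doppler shift; along each beam the radial velocity follows from v = c·f_d/(2·f0). A minimal illustration, with an assumed sound speed, transmit frequency, and measured shift rather than values from any particular instrument:

```python
# Doppler relation used by acoustic Doppler current profilers: scatterers
# moving along a beam shift the backscattered frequency by
# f_d = 2 * f0 * v / c, so v = c * f_d / (2 * f0).
# All values below are illustrative assumptions.
c = 1500.0      # nominal sound speed in seawater, m/s
f0 = 300e3      # transmit frequency, Hz (a common ADCP band)
f_d = 100.0     # measured Doppler shift in one range bin, Hz
v_radial = c * f_d / (2.0 * f0)
print(f"{v_radial:.2f} m/s along the beam")   # 0.25 m/s
```

Combining radial velocities from three or four beams pointed in different directions then yields the horizontal velocity components in each depth bin.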
Conventional ocean moorings, whether surface or subsurface, support discrete sensors distributed vertically along the mooring line. As noted above, multiple discrete sensors can report shears that, in fact, come from calibration errors in neighboring sensors. These are removed in the alternate approach of a movable platform transporting a single sensor suite vertically through the water column. The Moored Profiler (Doherty et al. 1999), as an example, employs a traction drive to crawl repeatedly up and down a conventional mooring wire. Other profilers use buoyancy changes to ascend and descend (e.g., Eriksen et al. 1982; Provost and du Chaffaut 1996) or combine a buoyant instrument package that floats up with a winch mounted on top of a subsurface mooring to haul it back down (e.g., Barnard et al. 2010; Send et al. 2013). Other sensor carriers attach to the mooring below a surface float and use a ratcheting drive that taps the heave from surface waves to crawl down the mooring line, then release from the line, float up, and lock back on to repeat the sequence (e.g., Fowler et al. 1997; Pinkel et al. 2011). Each profiling system has strengths and weaknesses, but a failed instrument platform causes loss of all observations, and profiling speeds limit the temporal sampling resolution. Best practice is to use moored profiling technologies in combination with discrete fixed-depth sensors.
b. Moorings and moored arrays
Changes to the vertical structure of ocean currents and properties are tracked by single moorings with a vertical line of sensors. Ocean Reference Stations, organized by OceanSITES of CLIVAR (Climate and Ocean: Variability, Predictability and Change) and by the Ocean Observatories Initiative (http://oceanobservatories.org/), are such single moorings with a vertical array of sensors. These reference time series at key locations quantify air–sea exchanges of heat and momentum as well as upper ocean storage of heat, salt, and momentum. In turn, reference time series anchor large-scale fields of oceanic surface fluxes to assess climate variability in the ocean and atmosphere, and to assess/improve climate models and validate/calibrate remote sensing of the sea surface (Weller and Plueddeman 2006).
In other situations, linear arrays (lines of instrumented moorings) produce 2D arrays of sensors to observe vertical and horizontal variations or to document net deep-water flow through passages into semi-enclosed abyssal basins. The restricted widths of such passages allow a finite number of moorings to form arrays in which fluctuations at neighboring moorings are coherent, yielding accurate estimates of spatially integrated velocity (net transport). Examples include Vema Channel and the Samoan Passage (Hogg et al. 1982; Zenk and Hogg 1996; Roemmich et al. 1996), and the deep gap between the Broken and Naturaliste Plateaus in the Indian Ocean (Sloyan 2006). Applying abyssal transport estimates to control volumes bounded by specific deep isopycnal/isothermal surfaces and the sea floor provides bounds on the intensity of abyssal mixing and net diapycnal/diathermal flow (Morris et al. 2001).
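The transport estimate itself is a cross-passage integral of depth-integrated velocity sampled at the mooring positions. A minimal sketch, with invented mooring positions and depth-integrated velocities standing in for real array data:

```python
import numpy as np

# Net volume transport through a passage from a linear mooring array.
# Each mooring supplies a depth-integrated along-passage velocity
# (units m^2/s: velocity times water depth); integrating these across
# the passage gives transport. All values are invented for illustration.
x = np.array([0.0, 10e3, 25e3, 40e3, 50e3])            # mooring positions, m
v_depth_int = np.array([0.0, 80.0, 150.0, 60.0, 0.0])  # depth-integrated velocity, m^2/s

# Trapezoidal integration across the passage (uneven spacing is fine)
transport = np.sum(0.5 * (v_depth_int[1:] + v_depth_int[:-1]) * np.diff(x))
print(f"{transport / 1e6:.1f} Sv")   # 1 Sv = 1e6 m^3/s
```

In practice the endpoint values are constrained by the sidewalls of the passage, which is why narrow passages with few moorings can still yield accurate net transports.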
Upper ocean flow through restricted passages has also been documented using linear arrays. Examples include interbasin exchanges between the North Pacific and Arctic via Bering Strait (Woodgate 2018); between the Arctic and North Atlantic through Fram and Denmark Straits (Tsubouchi et al. 2018; Jochumsen et al. 2012; Harden et al. 2016); and between the Pacific and Indian Oceans through passages in the Indonesian archipelago and through the Mozambique Channel (Gordon et al. 2010; Ridderinkhof et al. 2010).
Western boundary currents, the strongest flows in the deep ocean, have been a focus of mariners and oceanographers since before Benjamin Franklin (Richardson 1980). Moored western boundary current arrays are now central to measuring total basin transport, a key metric of the ocean’s role in climate. Table 3-1 describes some of the programs that used moored linear arrays to sample western boundary currents. Net transports can also be measured with a seafloor cable spanning the full width of a passage. The cable allows measurement of the potential induced by flow of conducting seawater through Earth’s magnetic field, which in certain cases can be directly related to the current’s transport. Notably, a cable-based multidecadal time series of Florida Current transport exists (http://www.aoml.noaa.gov/phod/floridacurrent/index.php) between Florida and the Bahamas. Key elements of these measurements are shipboard observations of velocity and stratification across the Strait that provide cable calibration information.
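The cable calibration step can be illustrated with a toy regression: the motionally induced cross-passage voltage is taken as roughly linear in transport, fit against shipboard transport sections, and the fit is inverted to convert the continuous cable record to transport. Everything below (slope, offset, noise level, transports) is synthetic, not Florida Current data:

```python
# Sketch: calibrating a seafloor cable against shipboard transport sections.
import numpy as np

rng = np.random.default_rng(0)
true_slope, true_offset = 40.0, 5.0      # mV per Sv, mV (assumed)
transport = rng.uniform(25.0, 35.0, 20)  # shipboard transport estimates, Sv
voltage = true_offset + true_slope * transport + rng.normal(0.0, 10.0, 20)

# Least-squares fit: voltage = offset + slope * transport.
slope, offset = np.polyfit(transport, voltage, 1)

def cable_transport(v_mV):
    """Transport (Sv) inferred from the continuous cable voltage record (mV)."""
    return (v_mV - offset) / slope

print(f"calibration: {slope:.1f} mV/Sv, offset {offset:.1f} mV")
```

In practice the voltage–transport relationship also drifts with water-property and pathway changes, which is why repeated shipboard sections remain a key element of the time series.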
Moored linear arrays have sampled western boundary currents in the Atlantic (southeast Labrador Sea; Zantopp et al. 2017), southeast of Woods Hole (Toole et al. 2011, 2017), near the equator (Schott et al. 2005), east of Brazil (PREFACE-SACUS; https://preface.w.uib.no/), in the Pacific’s Kuroshio (Imawaki et al. 2001; Book et al. 2002), in the East Australian Current (Sloyan et al. 2016), in the Indian Ocean (East Madagascar Current; Ponsoni et al. 2016), and in the Agulhas Current (Beal et al. 2015).
In each case, these finite-width arrays were terminated in midbasin, leaving the offshore flow uncounted: both unobserved elements of the mean circulation and partially resolved mesoscale eddies at the array’s end. To address this shortcoming, a few programs have attempted to document net coast-to-coast ocean transport with arrays that span the basin. An early example was the International Southern Ocean Studies observation of flow through Drake Passage (Neal and Nowlin 1979). Despite nearly 20 moorings across the passage, recent analysis of repeated shipboard velocity sections suggests that the Antarctic Circumpolar Current transport had been undersampled (Firing et al. 2011). Rather than using an eddy-resolving mooring array across the ocean, the Rapid Climate Change–Meridional Overturning Circulation and Heatflux Array (RAPID–MOCHA) observation across the Atlantic at ~26°N (McCarthy et al. 2015) combined boundary current arrays and the Florida Strait cable with “dynamic-height moorings” (Johns et al. 2005) in the ocean interior. Geostrophic thermal wind balance combined with estimates of wind-driven Ekman flow and constraints on the net meridional transport were used to infer the time-varying overturning circulation. Relating horizontally averaged velocity to horizontal dynamic pressure differences using geostrophy complicates partitioning transport estimates by water mass class, but the relatively small variations of isopycnal depths, and of water properties on those surfaces, at 26°N in the Atlantic allow reasonable estimates to be made.
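The RAPID-style recipe can be sketched as follows: thermal-wind shear from end-point density profiles, assumed boundary (cable) and Ekman contributions, and a barotropic compensation that zeroes the net transport across the section. The density profiles, basin width, and transport magnitudes below are idealized stand-ins, not RAPID–MOCHA data; the sketch only shows how the pieces combine into an overturning estimate:

```python
# Idealized RAPID-style overturning estimate (all inputs assumed).
import numpy as np

g, rho0 = 9.81, 1025.0
f = 2 * 7.2921e-5 * np.sin(np.deg2rad(26.5))  # Coriolis parameter at 26.5N
L = 6.0e6                                     # basin width, m (assumed)
dz = 10.0
z = np.arange(-5000.0 + dz / 2, 0.0, dz)      # bin mid-depths, m

# Idealized end-point density profiles (eastern side slightly denser aloft).
rho_west = rho0 + 2.5 * (1 - np.exp(z / 1000.0))
rho_east = rho_west + 0.2 * np.exp(z / 800.0)

# Thermal wind, f dv/dz = -(g/rho0) d(rho)/dx, integrated upward from an
# assumed level of no motion at the bottom.
dvdz = -(g / (rho0 * f)) * (rho_east - rho_west) / L
v_geo = np.cumsum(dvdz) * dz                  # m/s
interior = v_geo * L * dz                     # interior transport per bin, m^3/s

# Boundary-current and Ekman contributions (assumed magnitudes and depths).
florida = np.where(z > -800.0, 32.0e6 / 80, 0.0)  # cable transport, top 800 m
ekman = np.where(z > -50.0, 3.5e6 / 5, 0.0)       # wind-driven Ekman, top 50 m

total = interior + florida + ekman
total -= total.sum() / len(z)                 # barotropic compensation: net zero

# Overturning streamfunction: cumulative transport from the surface down.
psi = np.cumsum(total[::-1])
moc = psi.max() / 1e6
print(f"MOC ~ {moc:.1f} Sv")
```

Even this toy version reproduces the essential structure: northward upper-ocean flow concentrated at the western boundary, a southward interior and deep return flow, and an overturning maximum near the base of the thermocline.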
In contrast, the subpolar North Atlantic has strong horizontal water property gradients. The OSNAP (Overturning in the Subpolar North Atlantic Program; Lozier et al. 2017) array thus employs a mix of moorings to observe meridional transport, including high-resolution linear boundary arrays and dynamic height moorings (Fig. 3-11). All OSNAP moorings but one are subsurface, carrying single-point current meters and ADCPs, and temperature–conductivity recorders to observe transport of heat and freshwater. A single surface mooring just east of southern Greenland observes air–sea fluxes that drive deep convection and water-mass formation.
Two-dimensional moored arrays are frequently used to investigate ocean dynamics. The tropical moored arrays [Tropical Atmosphere Ocean (TAO), section 7; Prediction and Research Moored Array in the Tropical Atlantic (PIRATA), https://www.pmel.noaa.gov/gtmba/pirata; and Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA), https://www.pmel.noaa.gov/gtmba/pmel-theme/indian-ocean-rama] have both latitudinal and longitudinal extent and span the tropical belts of the three main ocean basins. Their large east–west spacing resolves the long zonal scales of equatorially trapped waves. Small correlation scales at mid- and high latitudes challenge arrays seeking to resolve mesoscale eddy motions. The first such study, MODE in 1973 (MODE Group 1978), was motivated by Swallow and Crease's middepth float observations during the R/V Aries expedition, which revealed energetic eddy motions in the ocean interior rather than the slow poleward drift predicted by abyssal circulation theory (Stommel 1958).
Moored observation has also been extended to small spatial scales of O(1–50) m. For example, Weller and colleagues studied internal tides over Stellwagen Bank using a linear mooring array with 10-m vertical spacing and 30-m horizontal spacing (Fig. 3-12; Grosenbaugh et al. 2002). Later, shallow water variability was studied with a 3D array consisting of a floating 20 m × 20 m 2D mesh with 1-m spacing and vertical sensor arrays hung from selected nodes (Fig. 3-13). These 3D sensor arrays have been deployed successfully, but challenge users to field enough sensors to exploit them; vertical strings of 20 sensors at every node would require 8000 sensors. Low-cost sensors networked to share power and data recording are needed.
6. Evolution of observing surface meteorology and air–sea fluxes
Observing how the ocean is forced by, and forces, the atmosphere has been an important goal for oceanographers. Over the last 100 years, surface meteorology and air–sea flux estimates have been based on observations from merchant ships and surface buoys. Products of gridded surface meteorology and air–sea fluxes have been developed from observations, numerical model outputs, satellite observations, and combinations of all these.
a. Shipboard observing
In 1842, Matthew Fontaine Maury, serving at the Depot of Charts and Instruments, provided logbooks to ships’ captains for them to collect surface wind and current observations. He then developed Pilot Charts, and maps of the winds over the ocean were published in 1848. Ships at sea continued to observe surface meteorology over the next century using barometers and wet- and dry-bulb thermometers, with ocean temperature sampled from buckets or engine cooling water. These observations enabled more complete maps of surface meteorology and calculation of air–sea fluxes of heat and momentum.
The U.S. Navy Marine Climatic Atlas of the World, first issued in eight volumes in the 1950s, included maps of precipitation, air temperature, barometric pressure, cloud cover, humidity, tides, waves, sea surface temperature, cyclones, visibility, and wind. This atlas was updated and reissued in the 1970s. Using bulk formulae that parameterize air–sea fluxes in terms of basic observables, maps and analyses of air–sea fluxes were developed from the ship observations by Bunker (1976), Hastenrath and Lamb (1977, 1978, 1979a,b), Hellerman and Rosenstein (1983), Isemer and Hasse (1985, 1987), and others. In the early 1980s, the National Oceanic and Atmospheric Administration (NOAA) and the National Center for Atmospheric Research (NCAR) collaborated to collect ship data and develop the Comprehensive Ocean–Atmosphere Dataset (COADS). This was followed by efforts to recover additional historical data in the International Comprehensive Ocean–Atmosphere Dataset (ICOADS; Woodruff et al. 2005).
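The bulk formulae invoked above take a generic form that can be sketched with constant exchange coefficients. This is a deliberate simplification: the climatologies and the later COARE algorithm use stability- and wind-speed-dependent coefficients, and the constants below are merely typical values:

```python
# Minimal bulk-formula sketch (constant exchange coefficients, not the
# full COARE algorithm). Fluxes of momentum and sensible/latent heat
# from mean meteorological observables.
import math

RHO_A = 1.2    # air density, kg/m^3
CP_A = 1004.0  # specific heat of air, J/(kg K)
LV = 2.5e6     # latent heat of vaporization, J/kg
CD, CH, CE = 1.2e-3, 1.1e-3, 1.2e-3  # assumed constant exchange coefficients

def saturation_q(t_c, p_hpa=1013.0):
    """Saturation specific humidity (kg/kg) via the Tetens formula."""
    es = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # vapor pressure, hPa
    return 0.622 * es / (p_hpa - 0.378 * es)

def bulk_fluxes(u, t_sea, t_air, rh):
    """Wind stress (N/m^2) and sensible/latent heat fluxes (W/m^2,
    positive for ocean heat loss), from wind speed u (m/s), sea and air
    temperature (deg C), and relative humidity (%)."""
    q_air = rh / 100.0 * saturation_q(t_air)
    q_sea = 0.98 * saturation_q(t_sea)  # 0.98: salinity reduction at the surface
    tau = RHO_A * CD * u**2
    q_sen = RHO_A * CP_A * CH * u * (t_sea - t_air)
    q_lat = RHO_A * LV * CE * u * (q_sea - q_air)
    return tau, q_sen, q_lat

tau, q_sen, q_lat = bulk_fluxes(u=7.0, t_sea=28.0, t_air=27.0, rh=80.0)
print(f"tau={tau:.3f} N/m^2, Q_sen={q_sen:.0f} W/m^2, Q_lat={q_lat:.0f} W/m^2")
```

For typical trade wind conditions the latent flux dominates the turbulent heat loss, which is why humidity sensor accuracy (discussed below) matters so much.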
In an effort to maximize the value of ship-of-opportunity observations, Kent et al. (1993) looked closely at the instrumentation and observing practices of 45 ships in the North Atlantic to quantify biases and develop corrections. These were applied to observations used to compute the air–sea fluxes via the bulk formulae for the Southampton Oceanography Centre (SOC) air–sea flux climatology (Josey et al. 1998). The SOC climatology was updated by the National Oceanography Centre (NOC) flux climatology (Berry and Kent 2009), which included uncertainties. Oceanographic research vessels and polar supply ships are now equipped with more complete and better meteorological sensors than volunteer observing ships, and their data benefit from frequent sampling and scrutiny when used for research.
The bow mast on some research vessels includes fast-response turbulence sensors to determine fluxes by the direct covariance (eddy correlation) method. Fairall at the NOAA Earth System Research Laboratory (ESRL) built a system (https://www.esrl.noaa.gov/psd/psd3/air-sea/oceanobs/) for direct covariance fluxes and mean meteorological observations to develop the bulk flux algorithms known as the COARE bulk formulae (Fairall et al. 1996a,b) following the Coupled Ocean–Atmosphere Response Experiment. Direct covariance flux observations are now widely used to validate and extend parameterizations used in bulk formulae (Fairall et al. 2003).
b. Ocean weather stations
With increasing shipping and the prospect of cross-ocean aircraft routes, plans were put in place in the 1920s to make weather observations from ships at fixed sites in the North Atlantic and North Pacific. The value of these observations increased during World War II and in the 1950s, as additional oceanographic observations, including temperature and salinity profiles and bottle sampling, began at some weather ships. OWS Bravo in the Labrador Sea collected temperature and salinity profiles and surface meteorology from 1964 to 1974. At OWS N (eastern North Pacific, 1949–74) and OWS P (Gulf of Alaska, 1949–81), surface meteorological observations provided surface forcing, and bathythermograph profiles were taken every 3 h. The availability of coincident, frequently sampled time series of surface forcing and upper ocean temperature and salinity structure from the OWS served as the foundation for developing one-dimensional ocean models (e.g., Denman 1973; Denman and Miyake 1973) and simulating upper ocean structure in different models (Martin 1985; Tang et al. 2006).
OWS Papa remains the site of active observing and research. As models and remote sensing improved, observations by dedicated weather ships were seen as insufficiently unique and too costly to be continued. To some extent, the capability of manned sampling of both the ocean and atmosphere at fixed sites was maintained by the Research Platform FLIP, a 108-m-long manned spar buoy launched in 1962. Towed horizontally, the lower buoy is flooded to make the spar vertical and a stable platform for coincident upper ocean, air–sea flux, and surface meteorological observations for periods of a month at a time. Analogous to model development using OWS data, observations from FLIP led to new insight into upper ocean dynamics and to the development of the Price–Weller–Pinkel (PWP) one-dimensional ocean model (Price et al. 1986).
Through the 1980s, oceanographic interest grew to go beyond local one-dimensional models and heat and freshwater budgets. Defining an open ocean volume where observations would be made, and closure of the heat budget attempted, was put forward as the Cage Experiment (Dobson et al. 1982). WOCE (WCRP 1986) set target accuracy goals: 20% for wind stress and 0.5–0.8 m s−1 for wind speeds up to 10 m s−1, accuracies sufficient to obtain sensible and latent heat fluxes to 10–15 W m−2. Surface heat fluxes accurate to within 10% of the monthly mean were sought. The TOGA program indicated the need for a net air–sea heat flux accurate to 10 W m−2.
c. Surface buoys
Surface moorings, with instrumented buoys and instrumentation along the mooring line, provide coincident observations of surface forcing and of upper ocean structure and variability. Like data collected by the OWS, these moored time series support investigation of air–sea interactions and upper ocean dynamics. Weather buoys have been deployed, most often near the coast, since the early 1950s. Their primary focus was collection of surface meteorological and wave data in support of weather forecasting and predictions for mariners. Given the challenge of making unattended observations, weather buoys typically observe wind speed and direction, barometric pressure, air and sea temperature, and humidity but do not observe incoming shortwave and longwave radiation or precipitation and thus do not provide the mean meteorological data needed to compute the air–sea fluxes of heat, freshwater, and momentum.
The 1970s saw development of more capable surface moorings to support investigation of air–sea interaction and upper ocean dynamics. The Mixed Layer Dynamics Experiment (MILE) conducted in the Gulf of Alaska near OWS Papa fielded two surface moorings (Davis et al. 1981a,b) and current meters (Weller and Davis 1980) developed to improve the accuracy of upper ocean current observations made from surface moorings. However, MILE relied on surface meteorology from nearby ships. An international Joint Air–Sea Interaction Experiment (JASIN) was conducted in 1972 and again in 1978 in the Rockall Trough west of Scotland. The 1978 field effort brought 14 ships and 3 aircraft together to observe air–sea interaction and upper-ocean dynamics (Pollard et al. 1983). The Seasat satellite provided surface wind stress, surface waves, and atmospheric water content. Mooring deployments included early deployments of surface moorings instrumented to collect upper ocean temperature and currents in addition to surface meteorology (Fig. 3-14).
Surface moorings matured as platforms to collect meteorological observations for bulk-formulae air–sea fluxes. Technical challenges for the meteorological observations included sensor degradation from various natural sources, power limited by batteries, solar heating, and vandalism. The moorings themselves had to resist stresses from ocean forces, fish biting the mooring line, and corrosion. In the late 1970s and 1980s, Halpern and colleagues at the NOAA Pacific Marine Environmental Laboratory (PMEL) sought to observe and predict ENSO variability. In 1976 they deployed current meter moorings (Halpern 1987), and in 1980 followed with surface moorings providing real-time air and sea surface temperature (Halpern 1988). Wind velocity data were added to the transmissions from the equatorial moorings in 1986.
The WOCE and TOGA goals were difficult to achieve by adapting meteorological sensors used on land. Observing humidity, rainfall, and incoming shortwave and longwave radiation was particularly difficult. National Science Foundation funding helped improve meteorological and air–sea flux observations from buoys through the 1980s and 1990s. Incoming longwave radiation sensors were fielded in the mid-1990s, replacing parameterizations based on air and sea surface temperature and cloud cover. Precipitation observations increased after COARE.
Progress can be gauged by comparing uncertainties in the heat fluxes from different experiments. The Bunker atlas of fluxes is accompanied by estimated uncertainties in the four heat flux components (sensible, latent, net shortwave, and net longwave). For the field programs, laboratory calibrations, comparisons with shipboard sensors, and intercomparisons between buoys were used to estimate uncertainties in the meteorological observations. These were propagated through the bulk formulae to estimate surface-flux uncertainties delivered by the surface buoys. Initial results (Fig. 3-15) were discouraging. Inaccuracies in the calibrations, lack of direct observations of incoming longwave radiation, instability in amplifying voltages from radiometer thermopile networks, and drift in humidity sensors were early problems. These made uncertainties larger than those in the Bunker atlas. A new meteorological instrument, first named IMET (Improved Meteorology) and then ASIMET (Air–Sea Interaction Meteorology; Hosom et al. 1995), and concerted work on the sensors resulted in significant improvement. The present observational accuracy of net surface heat flux from a surface buoy with an ASIMET system is 8 W m−2 (Colbo and Weller 2009).
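The propagation step described above can be illustrated for the latent heat flux alone, estimating partial derivatives by finite differences and adding independent sensor errors in quadrature. The bulk coefficient and one-sigma sensor uncertainties below are assumed for illustration, not the published ASIMET figures:

```python
# Sketch: propagating sensor uncertainties through a bulk formula.
import math

def q_lat(u, dq):
    """Latent heat flux, W/m^2: rho_a * Lv * Ce * U * (q_sea - q_air),
    with rho_a * Lv * Ce = 1.2 * 2.5e6 * 1.2e-3 (assumed constants)."""
    return 1.2 * 2.5e6 * 1.2e-3 * u * dq

u0, dq0 = 7.0, 5.0e-3        # mean wind (m/s) and sea-air humidity difference (kg/kg)
sig_u, sig_dq = 0.2, 3.0e-4  # one-sigma sensor uncertainties (assumed)

# Sensitivities estimated by central differences at the observed state.
dQ_u = (q_lat(u0 + sig_u, dq0) - q_lat(u0 - sig_u, dq0)) / 2
dQ_dq = (q_lat(u0, dq0 + sig_dq) - q_lat(u0, dq0 - sig_dq)) / 2

# Independent errors add in quadrature.
sigma_Q = math.hypot(dQ_u, dQ_dq)
print(f"Q_lat = {q_lat(u0, dq0):.0f} +/- {sigma_Q:.0f} W/m^2")
```

The same machinery applied to each of the four heat flux components, summed in quadrature, yields the net-heat-flux uncertainties compared in Fig. 3-15.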
Key to obtaining the air–sea fluxes from surface meteorological observations on buoys are bulk-formulae parameterized fluxes of momentum and sensible and latent heat. Comparisons with direct observations of these fluxes using fast response sensors were used by Fairall, Edson, and others to refine the bulk formulae (e.g., Fairall et al. 1996a,b; Edson et al. 1998). Field studies, in particular, looked to improve the bulk formulae under low (<3 m s−1) winds and under high wind with large waves, including those in strong cyclones. The U.K. High Wind Air–Sea Exchanges (HWASE) program was conducted at Ocean Weather Station Mike (66°N, 2°E). The work at OWS Mike included direct observations of CO2 flux (Prytherch et al. 2010).
d. Toward sustained observations of global air–sea fluxes
Progress in the observational capability summarized by Fig. 3-15 was accompanied by improved surface mooring reliability and oceanographic instruments for moorings, enabling coincident observations of surface forcing and upper ocean structure and motivating plans to deploy well-instrumented surface moorings. Tropical arrays (see next section) were deployed and extratropical sites identified. Over the years, such sites provided in situ flux observations to test surface fluxes from models and remote sensing and led to the present plans to obtain surface flux fields on a global basis. Lessons learned merging sensors from diverse platforms in TOGA COARE (Weller et al. 2004) emphasized the role of calibrations and in situ intercomparisons between surface buoys and well-equipped ships. One-to-several-day intercomparisons of old and new buoys using laboratory calibrations enabled assembly of long time series of surface meteorology and air–sea fluxes with known uncertainties.
Surface flux products derived from remote sensing methods now join gridded surface fluxes from atmospheric model reanalyses. Air–sea flux fields obtained by optimally blending remote sensing and models have been validated against the surface buoys. The surface fluxes of heat and momentum have been identified as Essential Ocean Variables (EOVs) to be routinely measured as part of the GOOS and as Essential Climate Variables (ECVs) to be routinely observed as part of the Global Climate Observing System (GCOS; Lindstrom et al. 2012; WMO 2016).
A sparse global array of surface moorings would be used to sample different regimes (e.g., tropical convection, midgyre midlatitude, and cold, dry higher latitude) to support production of gridded model, remote sensing (e.g., Kato et al. 2013; Pinker et al. 2018), and hybrid global flux products (such as OAFlux, on a 1° grid; Yu and Weller 2007). Figure 3-16 shows two of the surface moorings now being deployed. One, called an ocean reference station, is battery powered and has redundant ASIMET systems for mean meteorology and a Direct Covariance Flux System for work on improving the bulk formulae. The second follows an approach taken recently in the Ocean Observatories Initiative to provide more power for high-volume real-time telemetry and hosting additional sensors. This surface mooring array is now partly implemented under the OceanSITES (http://www.oceansites.org) open-ocean time series component of the GOOS that includes extratropical sites and sites within the tropical moored arrays.
7. Observing ENSO and tropical ocean–atmosphere interactions
The Tropical Atmosphere Ocean (TAO) array (McPhaden et al. 1998), and its later incarnation as the TAO/TRITON array in partnership with the Triangle Trans-Ocean Buoy Network (TRITON), derive from a decade-long attempt to describe, understand, and predict seasonal climate variability associated with ENSO. ENSO arises through interactions between the ocean and the atmosphere in the tropical Pacific, mediated by surface wind and SST feedbacks. It is the strongest year-to-year fluctuation of the climate system and the greatest source of predictability on seasonal time scales. Its impacts on global patterns of weather variability (Yeh et al. 2018), marine and terrestrial ecosystems, and pelagic fisheries have far-reaching effects on human and natural systems (McPhaden et al. 2006). Drought, flooding, wildfires, and extreme events linked to ENSO variations can have major consequences for agricultural production, food security, power generation, public health, freshwater resources, and economic vitality in many parts of the globe.
The Norwegian-born meteorologist Jacob Bjerknes provided the conceptual framework for understanding ENSO in the 1960s (Bjerknes 1966, 1969). He identified the dynamical relationship between El Niño events, unusual warmings of the ocean along the west coast of South America, and the Southern Oscillation, the seesaw in atmospheric pressure between the Eastern and Western Hemispheres first described by Sir Gilbert Walker in the early twentieth century (Walker 1924; Walker and Bliss 1932). Using data from the International Geophysical Year, which coincided with a major El Niño in 1957/58, Bjerknes also realized that the entire tropical Pacific basin was involved in these periodic warmings, not just the coastal zone off western South America. Finally, he identified the far-field teleconnections from the tropical Pacific associated with El Niño that affected weather over North America and elsewhere.
It was from these beginnings that oceanographers in the 1970s began field programs in the tropical Pacific to understand the oceanic mechanisms involved in El Niño dynamics and to attempt to observe its basin-scale manifestations (McPhaden 2006). Klaus Wyrtki of the University of Hawaii pioneered the first El Niño–observing system by establishing a network of island and coastal tide gauge stations throughout the tropical Pacific to monitor seasonal to interannual variations in sea level and ocean surface geostrophic currents (Wyrtki 1974a,b, 1975a). Though sparse in geographical coverage, it was from this network and volunteer observing ship winds that Wyrtki formulated his seminal idea for the onset of El Niño (Wyrtki 1975b): warming in the eastern basin associated with El Niño was not caused by local winds, but by a prior strengthening of trade winds in the central Pacific, followed by their sudden collapse. This collapse then generated an eastward propagating downwelling equatorial Kelvin wave that depressed the thermocline in the east, causing SSTs to rise abnormally high.
The concept of an oceanic equatorial Kelvin wave was still in the realm of theory (Moore and Philander 1977) when Wyrtki proposed it as a dynamical link between wind forcing in the central Pacific and SST warming in the east. Numerical modeling work supported the notion (McCreary 1976; Hurlburt et al. 1976), but observational verification would await the collection of moored-buoy time series measurements several years later. The ability to moor surface buoys on the equator was a major engineering accomplishment in the mid-1970s (Halpern et al. 1976), given that early attempts in the presence of the intense and highly sheared equatorial undercurrent had failed (Taft et al. 1974). Success depended on, among other things, the incorporation of airfoil-shaped plastic clip-on fairings to reduce drag in the areas of high current, an innovation designed by Hugh Milburn and his engineering staff at NOAA/PMEL. This mooring design was used in moored buoy arrays deployed in the late 1970s at 152° and 110°W, leading to the detection of an eastward propagating pulse in zonal transport along the equator in April–May 1980 that could unambiguously be identified as a Kelvin wave consistent with theory (Knox and Halpern 1982).
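Wyrtki's timing argument can be checked with a back-of-envelope reduced-gravity estimate of the first-baroclinic-mode Kelvin wave phase speed; the layer depth and density contrast below are typical textbook values, not fitted to the 1980 observations:

```python
# Reduced-gravity Kelvin wave phase speed and Pacific crossing time.
# Illustrative parameter values (assumed).
import math

g = 9.81
rho0 = 1025.0
delta_rho = 3.0  # density contrast across the thermocline, kg/m^3 (assumed)
h = 150.0        # upper-layer (thermocline) depth, m (assumed)

# First-mode phase speed c = sqrt(g' h), with reduced gravity g' = g*drho/rho0.
c = math.sqrt(g * (delta_rho / rho0) * h)  # m/s

basin = 1.4e7    # approximate width of the equatorial Pacific, m
days = basin / c / 86400.0
print(f"c ~ {c:.1f} m/s; basin crossing ~ {days:.0f} days")
```

The resulting phase speed of roughly 2–3 m s−1 and a crossing time of a couple of months are consistent with the eastward propagation later detected in the moored records.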
The 1982/83 El Niño (Cane 1983) proved to be a watershed moment in the history of ENSO research. This El Niño, the strongest of the twentieth century up to that time, was neither predicted nor even detected until nearly at its peak. There are several reasons for this failure (McPhaden et al. 1998). Principal among them were 1) inability of satellite infrared sensors to detect changing SSTs after the eruption of El Chichón in April–May 1982 had injected a cloud of aerosols into the stratosphere and 2) the lack of any real-time in situ data from the tropical Pacific to verify what was actually happening in the ocean. This failure riveted the scientific community at a time when it was planning a major, 10-year international study of El Niño, a program that would eventually be known as the TOGA program (McPhaden et al. 2010a). The failure to predict or detect the 1982/83 El Niño in a timely way underscored the need for TOGA to develop reliable El Niño forecasting techniques and a real-time ocean observing system that could support both seasonal prediction and research.
TOGA’s emphasis on seasonal forecasting was rewarded early with successful prediction of the 1986/87 El Niño (Cane et al. 1986; Barnett et al. 1988). To address TOGA’s observational challenges, Hugh Milburn and his staff developed the Autonomous Temperature Line Acquisition System (ATLAS) mooring, capable of providing SST, upper ocean temperature, surface winds, and other meteorological parameters in real-time via satellite relay (Milburn and McLain 1986). This mooring system (Fig. 3-17), engineered to be relatively low cost and robust with a 1-yr design lifetime, provided data every day to avoid aliasing energetic high-frequency variations into the lower seasonal time-scale variations of primary interest. The ATLAS was the essential building block for a bold proposal by Stan Hayes of PMEL to populate the entire equatorial Pacific end-to-end with these moorings, a concept he called the TOGA-TAO array (Hayes et al. 1991). It took the entire 10 years of TOGA to build this array through a NOAA-led partnership involving five nations (McPhaden 1995). When complete, TAO was heralded as the “crowning achievement of TOGA” (Carlowicz 1997).
TAO was the centerpiece of a tropical Pacific observing system that included other elements: Wyrtki’s tide gauge network, drifting buoys, and ship-of-opportunity expendable bathythermograph transects (Fig. 3-18). These in situ systems complemented a constellation of Earth observing satellites for surface winds, SST, and surface height. TAO and other elements of the observing system were in place to capture the evolution of the 1997/98 El Niño, the strongest on record. Unlike in 1982/83, this event was tracked day by day with real-time mooring data (McPhaden 1999), which revealed the striking prominence of short-period westerly wind-event forcing of energetic equatorial Kelvin waves that initiated and amplified ocean warming (Fig. 3-19). Data from the observing system were also fed into seasonal forecast models for prediction of evolving oceanic and atmospheric conditions at 6–9-month lead times (Barnston et al. 1999). Many of the climate impacts of this El Niño were reliably predicted months in advance, a success that prompted the Secretary-General of the World Meteorological Organization to declare “The array of moored buoys stretching across the Pacific Ocean, originally established by the TOGA programme … has been an invaluable source of data for monitoring and modeling the event” (Obasi 1998).
At the turn of the twenty-first century, TAO morphed into TAO/TRITON, a partnership between NOAA and the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), with JAMSTEC’s TRITON moorings (Ando et al. 2017) occupying the western portion of the array. TRITON moorings were specifically designed to mimic the ATLAS data stream with similar accuracy and resolution so that there would be no jumps in the climate record. The ATLAS buoy design itself has gone through various design upgrades to take advantage of new sensor technologies, electronics, and data transmission advances like Iridium (Milburn et al. 1996; Freitag et al. 2018). New systems can carry a full payload of standard oceanographic and meteorological sensors: ocean temperature, salinity, velocity, air temperature, relative humidity, shortwave and longwave radiation, rain rate, and barometric pressure. In addition, the moorings can accommodate sensors for ocean microstructure, photosynthetically available radiation, sound, carbon, pH, oxygen, and other variables for specialized studies. Data from the newest systems are transmitted to shore in real time via satellite relay at 1- to 10-min sampling intervals depending on variable.
Moored surface buoys are ideal platforms from which to measure interactions between the ocean and the atmosphere and to see the energetic fast-time-scale processes that are evident both above and below the air–sea interface. The TAO moored buoy array concept was therefore expanded into the Atlantic Ocean for the study of tropical Atlantic climate variability through a program known as PIRATA (Bourlès et al. 2008), and later into the Indian Ocean to study the ocean’s role in monsoon dynamics through RAMA (McPhaden et al. 2009). Collectively, TAO/TRITON, PIRATA, and RAMA are known as the Global Tropical Moored Buoy Array (McPhaden et al. 2010b; Fig. 3-20). They constitute a contribution to GOOS and are a tropical component of the OceanSITES program (Send et al. 2010).
Data from the moored buoy arrays are distributed, via the Global Telecommunications System, to operational centers around the globe for routine ocean, weather, and climate forecasting. They also serve as a primary dataset for many oceanic and atmospheric databases and for virtually all oceanic and atmospheric reanalysis products. Ships that regularly service these arrays provide platforms of opportunity for deployment of Argo floats and drifting buoys, helping to build and maintain these and other elements of GOOS. The shipboard measurements routinely collected on moored buoy servicing cruises are themselves a valuable climate record not obtainable through other means (e.g., Johnson et al. 2002). Some data records from the original TAO array sites are now 30–35 years long, supporting studies not only of seasonal and interannual time-scale variability, but decadal variability (Amaya et al. 2015), the decadal modulation of ENSO (McPhaden 2012), and ENSO diversity as manifest by the distinction between central Pacific (CP) and eastern Pacific (EP) El Niños (Lee and McPhaden 2010; Capotondi et al. 2015). In addition, the major 2015/16 El Niño (L’Heureux et al. 2017) was the first for which the combined impacts of global warming and El Niño began to emerge in the tropical Pacific (King et al. 2016; Zhang et al. 2016; Brainard et al. 2017). Thus, the value of these moored time series will only increase as the records become longer and the climate system continues to change (Cai et al. 2014).
In 2005, responsibility for operating TAO was transferred from NOAA/PMEL to NOAA’s National Data Buoy Center (NDBC). The array temporarily collapsed in 2013/14 for want of regular servicing (Tollefson 2014) at the start of what proved to be three successive years of warm El Niño conditions (McPhaden 2015; L’Heureux et al. 2017). JAMSTEC subsequently decommissioned most of its mooring sites in the western Pacific. As a result of these developments, and in view of advances in our understanding and technological capabilities, an international committee is currently reviewing the design of the tropical Pacific Ocean Observing System, with final recommendations still pending (Cravatte et al. 2016).
8. A century of coastal ocean observing
The coastal ocean’s importance to society is the impetus for understanding its circulation, water-column structure, sea level, winds, and waves. About 50% of people live within 50 km of the coastal ocean, where we interact most directly with the marine environment. The coastal ocean is used for shipping and recreation, and it is the environment that shapes the marine ecosystem response to wind and freshwater forcing. Coastal oceans worldwide are highly productive, with nutrients fueling the food chain from phytoplankton and zooplankton on up to fish. Coastal upwelling regions account for only 1% of the World Ocean’s surface, but up to 25% of wild-caught seafood.
Interannual and interdecadal changes in open-ocean circulation and water properties are evident at the coast and can strongly influence fisheries take and coastal erosion. The El Niño/La Niña response on the U.S. West Coast is an example of how large-scale variability in the tropical Pacific arrives to influence coastal waters through both the oceanic waveguide and atmospheric teleconnections (see ENSO, section 7). Under a warming climate, increased storminess and lower dissolved oxygen and pH will directly influence the coastal ocean. Coastal oceans support excellent sustainable fisheries, but can be adversely affected by harmful algal blooms, low-oxygen events (hypoxia or even anoxia), and ocean acidification. Measuring these requires specialized sensors and platforms, coupled with theoretical understanding and numerical models of ocean circulation and biogeochemical cycles.
Knowledge of the coastal ocean has immediate benefits. Mariners depend on knowledge of waves, winds, and currents. For the Coast Guard conducting search and rescue missions or predicting the evolution of oil spills, knowledge of currents is critical. Navies study the coastal ocean to understand underwater sound and light propagation as their ships ply coastal waters. Shore property owners and engineers who build and maintain offshore structures like piers or jetties must know current strength, waves, and sediment transport. The coastal ocean is increasingly involved in producing renewable energy, either by extracting energy from currents and waves or by hosting offshore wind generation sites on fixed or floating platforms. As in most uses of coastal waters, knowledge of coastal circulation, winds, waves, and water-column structure is key.
a. Early coastal ocean observing
The importance of circulation, water temperatures, and salinities to fisheries led early researchers to focus on regions supporting major fisheries, for example, Georges Bank off the northeast U.S. and Canadian coasts. In the early 1900s, currents were measured using drift cards, mechanically recording propeller-type Ekman current meters lowered from an anchored ship, and by determining ship drift via dead reckoning. Temperatures were measured by reversing thermometers, and salinities with Nansen bottles and laboratory-based titrations. For a review of coastal ocean observing off the U.S. East Coast in the Middle Atlantic Bight, see Beardsley and Boicourt (1981). Bigelow (1915) used velocity estimates and surface water samples to map surface circulation and salinity from the Middle Atlantic Bight to the Gulf of Maine (Fig. 3-21). He showed the strong, continuous Gulf Stream offshore and more spatially variable and fresher coastal currents flowing around coastal features counter to the direction of the Gulf Stream. Bigelow’s chart of surface properties was remarkably accurate, revealing important features of the coastal ocean still being investigated. For example, a high-resolution section of water properties obtained by a towed SeaSoar equipped with a CTD provides a modern view of the shelfbreak front between the coastal and offshore waters in the Middle Atlantic Bight (Fig. 3-22). There is rich submesoscale variability around this front, including a “cold pool” inshore of the front and intrusions of both shelf and offshore water on either side of the front. The source and volume of these intrusions and the implied net heat and freshwater fluxes between the coastal and deep ocean remain subjects of observational and numerical modeling study today.
On the U.S. West Coast, hydrographic surveys in the mid-1900s helped to characterize the flow, water-column structure, and seasonality of the California Current System (Sverdrup et al. 1942). Interest in how the physics of the California Current affect fisheries led to CalCOFI, which started quarterly physical and biological sampling in 1949 (Scheiber 1990). These ship-based measurements included CTD profiling augmented later with shipboard ADCPs. In addition to physical observations, net tows and water sampling characterized nutrients and the biological fields from phytoplankton to fish.
The importance of wind-driven, coastal upwelling was realized early on. In the 1960s and 1970s, the Coastal Upwelling Experiment (CUE) was conducted off Oregon as part of the International Decade for Ocean Exploration. The CUE observations included shipboard hydrographic sampling, shore station wind measurements, and moorings equipped with Aanderaa current meters recording current speed and direction and temperature at 5-min intervals. From a few moorings and cross-shelf hydrographic sections, much was learned about the dynamics of wind-driven upwelling (Smith 1974; Huyer et al. 1974). These observations spurred advances in theoretical models of coastal upwelling (e.g., Allen 1973) and the first numerical ocean circulation modeling of this process (e.g., O’Brien and Hurlburt 1972).
As moored instruments and moorings became more reliable in the late 1970s, arrays of moorings were deployed to study the cross-shelf structure and seasonality of shelf circulation and water-column structure. One such effort was the 1979 Nantucket Shoals Flux Experiment array south of Nantucket Island, Massachusetts (Fig. 3-23; Beardsley et al. 1985). The array of about 25 VACMs was maintained from March 1979 to April 1980 to measure the seasonal response of the continental shelf to wind and buoyancy forcing. The array performed well and revealed the evolution of the “cold pool” over the midcontinental shelf, forcing of local currents by wind, and influences of warm-core Gulf Stream rings at the outer edge of the array. The alongshore transport over the full shelf was found to be 0.38 ± 0.07 Sverdrups (Sv; 1 Sv = 10^6 m^3 s^−1) to the southwest, in opposition to the mean wind stress, a metric used to test numerical models of buoyancy forcing in this region ever since (e.g., Chapman et al. 1986).
b. Coastal ocean sea level
Measurements of coastal ocean sea level provide century-long records of fluctuating sea surface heights at locations around the world. Pugh (1987) describes ways that coastal sea level has been measured, from simple tide poles to satellite altimetry. Coastal sea level is used routinely for port, shipping, and recreational activities and in a variety of scientific studies, from understanding daily tides to extracting the long-term sea level response to climate change. Munk et al. (1970) used measurements of sea level along the California coast to decompose the propagating tidal signal into the free waves of the system. A successful model of the semidiurnal tide includes a freely propagating Kelvin wave, a single Poincaré mode, and a low-amplitude forced wave. The coastal tide is dominated by a northward-propagating Kelvin mode, but farther offshore the modes combine to form a predicted amphidrome whose existence was subsequently confirmed by Irish et al. (1971).
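The Kelvin wave at the heart of such coastal tide models can be written schematically (this is the textbook form, not the fitted Munk et al. solution): with the coast at x = 0, propagation in +y (Northern Hemisphere), ocean depth H, and Coriolis parameter f,

```latex
\eta(x, y, t) = \eta_0 \, e^{-x/R} \cos(ky - \omega t),
\qquad c = \frac{\omega}{k} = \sqrt{gH},
\qquad R = \frac{\sqrt{gH}}{f} .
```

The amplitude decays offshore over the external Rossby radius R; for H ≈ 4000 m and f ≈ 8.4 × 10^−5 s^−1 (35°N), c ≈ 200 m s^−1 and R ≈ 2400 km, so the Kelvin mode dominates near the coast, while farther offshore its interference with the Poincaré contribution can produce an amphidrome.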
Wyrtki (1979) used filtered sea level to infer geostrophic velocities and study El Niño in the Pacific. Observing a non-isostatic response to atmospheric pressure changes along the east Australian coast led Hamon (1962) and Robinson (1964) to suggest continental shelf waves as the origin. Buchwald and Adams (1968) provided a more complete theoretical solution for continental shelf waves with periods of 8–10 days, motivating subsequent work on coastal trapped waves (CTWs) using coastal sea level observations (e.g., Chapman 1987). At time scales of months to years, Chelton and Davis (1982) used 29 years of nonseasonal monthly mean tide-gauge data along the North American west coast to explore nearshore ocean variability from Alaska to Mexico. Large-scale, interannual sea level anomalies were related to El Niño occurrences in the eastern tropical Pacific that propagate poleward with phase speeds of about 0.4 m s−1. The eastward North Pacific Current (West Wind Drift) bifurcates at the coast, feeding the subpolar Alaska Current to the north and the subtropical California Current to the south. In response to fluctuating, basinwide wind forcing patterns, the partitioning of transport between these two currents fluctuates out of phase.
Strub and James (2000) used coastal sea level stations to extend satellite altimetry measurements shoreward and investigate California Current System surface velocities. Altimeter data from tracks that cross the coast are usually flagged as bad near the coast, leaving a gap of roughly 30 km. The record is extended to the coast using tide gauges at coastal locations, filtered to remove the tides (and, with 20-day filtering, the CTWs) and then combined with the altimeter data. With these combined altimeter–tide gauge data, Strub and James (2000) studied the large-scale, seasonal dynamics of California Current surface circulation. A new satellite altimetry mission promises to deliver sea level data over the coastal ocean with high enough spatial resolution to study coastal mesoscale features close to shore without contamination.
Finally, coastal sea level is used to estimate long-term trends in sea level over the last century or two. This is of high importance as sea level rises under global warming (Church and White 2006). Coastal sea level records are also used to examine the combined effects of high sea level (e.g., from global warming and El Niño) and increasing wave heights on coastal erosion along the U.S. West Coast (Allan and Komar 2006).
c. Coastal ocean observations go 4D
Progress was made in the 1970s using simplified two-dimensional models of wind-driven upwelling and buoyancy-driven currents, even though the coastal ocean varies in all three spatial dimensions. Satellite maps of sea surface temperature made this three-dimensionality crystal clear by the mid-1970s (Bernstein et al. 1977). Figure 3-24 shows U.S. West Coast summertime upwelling under southward winds. Regional differences are apparent, for example, between the cold upwelling off central California and Oregon and the warm California Bight south of Point Conception. Strong along-coast variability on various scales is apparent, resulting from intrinsic hydrodynamic instability and interactions between the strong alongshore coastal upwelling jet and coastal (points, capes, bays) and bathymetric (submarine banks, canyons) features (Barth 1994).
As satellite imagery motivated three-dimensional coastal studies, moored mechanical current meters were improving the measurement of ocean currents. A few tens of current meters had been deployed in two-dimensional, cross-shelf arrays (e.g., Fig. 3-23), but a concerted effort was needed to bring enough oceanographic sensors to bear to transform understanding of coastal circulation. Thus was born the Coastal Ocean Dynamics Experiment (CODE) of the early 1980s (Beardsley and Lentz 1987), a multi-institutional, multi-investigator study off Northern California (Fig. 3-25). The elements of CODE were 1) 60 VACMs; sensors for wind, air and water temperature, solar radiation, conductivity, and water pressure; and the first large-scale deployment of VMCMs (Weller and Davis 1980); 2) shipboard observations of water temperature, conductivity, and velocity as a function of depth; and 3) aircraft observations of wind velocity and stress, air temperature and humidity, and sea surface temperature. Surface drifters were tracked from shore and by aircraft, and sea surface temperature and ocean color were measured by satellite.
CODE data greatly advanced knowledge of wind-driven coastal ocean dynamics. The results, summarized in Beardsley and Lentz (1987) and collected by Lentz (1990), include coastal ocean response to wind forcing; surface and mixed layer dynamics in the coastal ocean (Lentz 1992); the creation of warm, poleward flows adjacent to the coast during wind “relaxations” (Send et al. 1987); and offshore “squirts” of coastal water (Davis 1985).
To better understand how local coastal currents and water-column structure can be forced by remote winds via CTWs (e.g., Gill and Schumann 1974; Allen 1976; Clarke 1977), the CODE intensive array was complemented by a larger-scale array of current meters, coastal sea level stations, and wind observations called SuperCODE (Denbo and Allen 1987). Earlier coastal experiments had used CTW theory to understand how local and remote winds combine to force changes in sea level. Observations of wind and sea level along the U.S. West Coast agree with predictions using a wind-forced, damped, first-order wave equation based on CTW theory (Fig. 3-26; Halliwell and Allen 1984). The SuperCODE array tested CTW theory for predictions of both sea level and along-shelf velocity (Brink et al. 1987; Chapman 1987). Understanding both local and remote forcing is central to modern coastal ocean observing.
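The wind-forced, damped, first-order wave equation used in such CTW tests can be sketched numerically. The toy integration below uses a first-order upwind scheme; the coefficients c, T_f, and b are purely illustrative, not the fitted values of Halliwell and Allen (1984):

```python
def integrate_ctw(tau, dy, dt, c=3.0, t_f=3 * 86400.0, b=2e-4):
    """Integrate  d(eta)/dt + c d(eta)/dy + eta/T_f = b * tau(y, t)
    with a first-order upwind scheme (signal travels toward +y, i.e.,
    poleward with the coast on the right in the Northern Hemisphere).

    tau[n][j] is the alongshore wind stress at time step n, grid point j.
    c (m/s), t_f (s), and b are illustrative, not fitted, coefficients.
    """
    nt, ny = len(tau), len(tau[0])
    eta = [[0.0] * ny for _ in range(nt)]
    r = c * dt / dy                       # Courant number
    assert r <= 1.0, "upwind scheme needs c*dt/dy <= 1"
    for n in range(nt - 1):
        for j in range(ny):
            upstream = eta[n][j - 1] if j > 0 else 0.0  # quiet upstream boundary
            eta[n + 1][j] = (eta[n][j]
                             - r * (eta[n][j] - upstream)   # advection
                             - dt * eta[n][j] / t_f         # frictional decay
                             + dt * b * tau[n][j])          # wind forcing
    return eta

# Example: a 20-h wind pulse near the upstream end of a 2000-km coastline.
dy, dt = 50e3, 3600.0
nt, ny = 240, 40
tau = [[0.1 if (10 <= n < 30 and j < 5) else 0.0 for j in range(ny)]
       for n in range(nt)]
eta = integrate_ctw(tau, dy, dt)
```

The imposed wind pulse generates a sea level signal that propagates poleward at c while decaying on the frictional time scale T_f, the qualitative behavior tested against tide-gauge and current observations.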
d. Two new ways to measure coastal ocean currents
A major step in measuring coastal ocean currents was the use of ADCPs (see section 5a). Shipboard ADCPs were used in CODE (Kosro 1987) and in many coastal ocean experiments to follow, and are now permanently installed on oceanographic vessels. A novel use of ADCPs is to measure profiles of turbulent Reynolds stresses through the water column (e.g., Lohrmann et al. 1990; Gargett 1994; Stacey et al. 1999). This ADCP turbulence measuring technique has been applied to a wide range of coastal boundary layer and estuarine studies.
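The Reynolds stress technique rests on a simple identity: for two opposing ADCP beams tilted θ from the vertical, the difference of the along-beam velocity variances is proportional to ⟨u′w′⟩. A minimal sketch of this “variance method” with synthetic data (the beam angle, noise levels, and record length are illustrative):

```python
import math
import random
import statistics

def reynolds_stress(beam1, beam2, theta_deg=20.0):
    """Estimate <u'w'> from the variances of two opposing along-beam
    velocity records (variance method; cf. Lohrmann et al. 1990):

        var(b1) - var(b2) = 4 sin(theta) cos(theta) <u'w'>,

    assuming the turbulence is statistically homogeneous across the
    horizontal separation of the beams.
    """
    th = math.radians(theta_deg)
    return (statistics.pvariance(beam1) - statistics.pvariance(beam2)) / (
        4.0 * math.sin(th) * math.cos(th))

# Synthetic check: build opposing-beam velocities from known u', w'.
random.seed(1)
th = math.radians(20.0)
u = [random.gauss(0.0, 0.05) for _ in range(20000)]        # u' (m/s)
w = [0.3 * ui + random.gauss(0.0, 0.01) for ui in u]       # correlated w'
b1 = [ui * math.sin(th) + wi * math.cos(th) for ui, wi in zip(u, w)]
b2 = [-ui * math.sin(th) + wi * math.cos(th) for ui, wi in zip(u, w)]

true_uw = sum(ui * wi for ui, wi in zip(u, w)) / len(u)
est_uw = reynolds_stress(b1, b2)
print(f"true <u'w'> = {true_uw:.2e}, variance-method estimate = {est_uw:.2e}")
```

In practice, instrument noise, surface-wave contamination, and departures from homogeneity across the beam spread all complicate the estimate; the synthetic example verifies only the underlying identity.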
Wide areas of ocean surface currents can be measured by land-based, high-frequency radar (HFR). This technique originated in the mid-1950s (Crombie 1955) and was advanced in the 1970s (e.g., Barrick et al. 1977). The technique relies on backscatter of 3–50-MHz radio waves, broadcast from shore stations (Paduan and Washburn 2013), from ocean waves with half the wavelength of the incident radio wave (Bragg scattering). Motion of the surface waves induces a Doppler frequency shift of the radio return due both to ocean currents and to ocean wave propagation (known from the dispersion relation). Range gating provides radial spatial resolution, and a two-dimensional map is obtained if two land stations are used. Depending on the radio frequency, ranges vary from a few kilometers at 40 MHz to over 200 km at 5 MHz, providing hourly surface velocity maps with horizontal resolution of 0.3 km (at 40 MHz) to 6 km (at 5 MHz) and accuracies of 0.05–0.1 m s−1. Today, HFR systems are deployed along much of the U.S. coast and in many international locations. HFR data have extended our understanding of the coastal ocean response to wind forcing over an entire coastline and measured tidal, inertial, and propagating CTW velocity signals (Fig. 3-27). HFR systems are now a routine element of modern ocean observing systems, and the data are used to improve data-assimilating coastal ocean circulation models (e.g., Oke et al. 2002).
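The chain of inference from radio spectrum to surface current is short enough to sketch. The code below computes the Bragg ocean wavelength (half the radio wavelength), the intrinsic Bragg frequency from the deep-water dispersion relation ω² = gk, and attributes any residual Doppler shift to the radial surface current; the example numbers are illustrative:

```python
import math

G = 9.81            # gravitational acceleration (m s^-2)
C_LIGHT = 2.998e8   # speed of light (m s^-1)

def bragg_parameters(radar_freq_hz):
    """Bragg ocean-wave wavelength and intrinsic frequency for a given
    radar frequency: the backscattering wave has half the radio
    wavelength, and its frequency follows from deep-water dispersion,
    omega**2 = g * k."""
    lam_radio = C_LIGHT / radar_freq_hz
    lam_bragg = lam_radio / 2.0
    k = 2.0 * math.pi / lam_bragg
    f_bragg = math.sqrt(G * k) / (2.0 * math.pi)
    return lam_radio, lam_bragg, f_bragg

def radial_current(radar_freq_hz, measured_doppler_hz):
    """Radial surface current from the residual Doppler shift: any shift
    beyond the intrinsic Bragg frequency is advection of the Bragg waves
    by the current, u_r = delta_f * lambda_radio / 2."""
    lam_radio, _, f_bragg = bragg_parameters(radar_freq_hz)
    return (measured_doppler_hz - f_bragg) * lam_radio / 2.0

# Example: a 25-MHz system sees its Bragg peak at +0.527 Hz.
lam_radio, lam_bragg, f_bragg = bragg_parameters(25e6)
print(f"Bragg wavelength {lam_bragg:.2f} m, intrinsic frequency {f_bragg:.3f} Hz")
print(f"radial current {radial_current(25e6, 0.527):+.3f} m/s (toward the radar)")
```

Each station yields only the radial component of velocity along its look directions; combining radials from two or more stations produces the two-dimensional maps described above.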
e. Multiscale, multiplatform coastal ocean observing
A modern coastal ocean observing system is based on a multiscale, multiplatform approach, more often than not combining observations with high-resolution numerical ocean circulation, and sometimes biogeochemical, models. Establishing “best practices” in coastal ocean observing was aided by the National Ocean Partnership Program (NOPP) in the late 1990s. With NOPP support, many groups pursued a variety of coastal ocean observing and modeling efforts that resulted in establishing the essential elements of an observing system. These elements include moorings, bottom landers, autonomous sea surface and underwater vehicles, and shore and estuarine stations; each platform supports a wide variety of physical, biological, and chemical sensors. Power is supplied by batteries, augmented on site by power generation from the wind and sun, and sometimes by seafloor cables. Data are transmitted to shore via cell phones, Iridium satellite, and, if it exists, a seafloor cable. Land-based HFR and satellite observations complement the in-water observing platforms. Data are often disseminated to ocean users in near-real-time via the various regional Integrated Ocean Observing System groups. Ocean data are assimilated into numerical ocean circulation models and are used to verify and improve ocean biogeochemical models (e.g., Siedlecki et al. 2015). An example of a multiscale, multiplatform coastal ocean observing system spanning national borders and operated by academic, government, and private entities is shown for the northeast Pacific in Fig. 3-28.
f. To the future
Much remains to be understood about coastal ocean dynamics over a huge scale range, from O(1000) km boundary currents to sharp coastal fronts of O(10) m. Coastal oceans remain challenging places to sustain measurements in the face of large breaking waves, winter storms, and, on polar shelves, ice. The “inner shelf” between the wind-driven continental shelf and the breaking-wave surf zone has received inadequate attention. The inner shelf, the link between the coastal ocean and sandy beaches and rocky shores, is made difficult to observe by breaking waves and strong currents. Recently, a multi-institutional group of coastal oceanographers took on the challenge of measuring the inner shelf, deploying over 150 moorings with temperature sensors and ADCPs from the 50-m isobath to the surf zone off Point Sal, California (Lerczak et al. 2019). This array, motivated by the 1980s CODE program, was complemented by shipboard measurements using towed profilers and instrumented bow chains from ships ranging in length from 5 to 73 m. Progress is being made on low-frequency wind-driven motions, high-frequency nonlinear internal waves, mixing and dissipation in shallow water, and flow–topography interactions.
Future effort is needed on a range of coastal ocean phenomena, including submesoscale dynamics, Arctic shelf dynamics, and storm surge from hurricane-strength storms. The latter has become more important in light of the impacts on society and infrastructure from Superstorm Sandy in 2012 and the three major 2017 hurricanes, Harvey, Irma, and Maria. There is growing evidence for increased wave amplitudes and commensurate coastal erosion with climate change; this needs attention (Ruggiero et al. 2010). We must continue to measure the connection between the deep and coastal ocean, as these are inextricably linked through delivery of open-ocean properties like nutrients, dissolved oxygen, and pH to the coast. In turn, the coastal ocean exports freshwater, potential pollutants, and harmful algal blooms offshore.
There is great potential for further use of acoustics for understanding coupled physical and biological processes in the coastal ocean, for example, the association of zooplankton, fish, and predators at ocean fronts. Newly developed optical imaging systems for phytoplankton, zooplankton, and larval fish are ready for greater use alongside physical oceanographic sensors. Genomic sensors to detect populations of harmful algae show increasing promise. New sensors like optodes for dissolved oxygen and field-effect transistors for pH provide opportunities for sensing, understanding, and predicting the stressors of hypoxia and ocean acidification.
E. Lester Jones, writing in his 1918 U.S. Coast and Geodetic Survey report (Jones 1918), called for “measurements of the direction and velocity of the currents, direction and velocity of the wind … to establish a definite relation between the currents and certain other meteorological conditions.” He concluded by stating, “The Pacific coast has been woefully neglected, and years of work are required to complete this important task.” In the century since Jones’s call to action, tremendous progress has been made observing the coastal ocean with a wide variety of instruments and measurement platforms. These data allow us to continue to expand our understanding and ability to predict coastal ocean physical processes and how they shape and influence marine ecosystems in this vitally important part of the world’s ocean.
9. Observing the polar oceans
Polar ocean observing has a long and colorful history, arguably beginning with Fridtjof Nansen’s remarkable 3-year expedition aboard Fram that attempted to reach the North Pole (http://frammuseum.no/polar_history/expeditions/the_first_fram_expedition__1893-1896_/). In addition to documenting drift of the ice pack, the sea-going team made a variety of atmospheric and oceanographic measurements during the voyage and contributed to numerous papers and scientific volumes. Notable among these was Nansen’s observation of the relatively warm Atlantic Water layer below the cold halocline and indication that sea ice tended to drift to the right of the wind direction (Nansen 1902). Nansen reportedly described this ice drift observation to Bjerknes, who in turn tasked a junior assistant, V. Walfrid Ekman, with investigating the possible physical process. Ekman’s resulting paper (Ekman 1905) remains a foundation of wind-driven ocean circulation theory. Scientific observation was also a central design focus of British Antarctic expeditions by Scott and Shackleton as well as explorers from other nations early in the twentieth century.
Beyond extreme environmental conditions, particularly in winter, the sea ice that blankets the polar oceans is a major impediment to accessing these regions. As reviewed by Coachman and Aagaard (1974), there are three general approaches to high-latitude observing: vessel based (either measuring directly from an icebreaker or using the ship to stage a manned camp deployed on an ice floe), aircraft based (typically for short-term observation), and autonomous (instruments deployed on/below the sea ice or on subsurface moorings). Prominent in the first approach are the series of Arctic-drifting USSR/Russian ice camps, initiated in 1937 with North Pole-1 (NP-1; Shirshov and Fedorov 1938). Notably, NP-1 was staged using aircraft, not a ship, but after a 274-day drift, the camp’s four observers were eventually evacuated by icebreaker in the Greenland Sea. Over subsequent decades, Soviet and Russian investigators occupied a series of NP drifting stations; the most recent is number 41, termed “NP-2015” for the year that it was occupied.
Similar, long-term drifting stations were staged in the west, beginning with Project ICICLE in 1952, which established a weather station on an ice island thought to have broken off from an Ellesmere Island glacier. Ice camps were also maintained during the International Geophysical Year (1957–58), including Drift Station Alpha, from which velocity profile data were collected. Hunkins (1966) interpreted the observed clockwise turning of the velocity with depth below the ice floe in terms of Ekman dynamics. [Interestingly, Hunkins notes that Brennecke (1921) reported counterclockwise spiraling with depth in data from the Weddell Sea, as Ekman theory predicts for the Southern Hemisphere.] Subsequent manned ice station programs include the Arctic Ice Dynamics Joint Experiment (AIDJEX; a pilot study in 1972 and the main program in 1975–76), the Fram III Expedition (1981–82), the Surface Heat Budget of the Arctic Ocean study (SHEBA; 1997–98), and the Norwegian Young Sea Ice cruise (N-ICE2015) program (2015), to note a few.
In the Southern Ocean, the international Ice Station Weddell project (1992), conducted from R/V Akademik Fedorov with the support of R/V Nathaniel B. Palmer, has been characterized as the first scientific drifting station in the South. That expedition was followed by Ice Station POLarstern (ISPOL), conducted in 2004 and led by scientists from the Alfred Wegener Institute. Final planning is now underway for the international Multidisciplinary Drifting Observatory for the Study of Arctic Climate (MOSAiC) in the Arctic, scheduled for 2019–20 aboard the icebreaker Polarstern (https://www.mosaic-expedition.org/).
Again, citing Coachman and Aagaard (1974), aircraft-supported surveys of the Arctic Ocean were pioneered by Soviet researchers shortly after WWII. Deep hydrographic data collected and analyzed by Worthington and others during an early U.S. effort called Operation SKIJUMP I and II suggested the existence and sill depth of the Lomonosov Ridge that bisects the Arctic (Worthington 1953). Modern examples of aircraft-supported ocean sampling include annual expeditions in conjunction with the North Pole Environmental Observatory (initiated in 2000, http://psc.apl.washington.edu/northpole/index.html) and the companion Switchyard program (http://psc.apl.washington.edu/switchyard/index.html), both led by researchers at the University of Washington Polar Science Center. Using small, twin-engine, fixed-wing aircraft capable of landing on, and taking off from, relatively smooth patches of sea ice (oftentimes frozen leads), grids of hydrographic stations were occupied each spring using specialized instrumentation and water samplers designed to fit through a small ice hole drilled at each site. The former of these programs builds in part on logistics established to support the annual tourist program Barneo that transports outdoor enthusiasts to the North Pole each spring (https://en.wikipedia.org/wiki/Barneo).
Typically, only one or at most a few drifting icebreakers or ice camps have been active in the Arctic at any one time, and only a handful of such programs have ever been fielded in the Southern Ocean. The historical database of polar ocean observations is consequently sparse and seasonally biased. Nevertheless, human-based sampling of the polar oceans is now, and will likely continue to be, a critical leg of the high-latitude ocean observing triangle because there remain a host of parameters for which there are no viable autonomous sensors (just as is the case at temperate and tropical latitudes). Examples include oxygen isotope analyses used to distinguish meteoric water from ice melt, biological taxonomy/genomics analyses, and longwave radiation, to name just three. Also, as at lower latitudes, ocean currents and a variety of water properties are observed autonomously at polar latitudes over long periods.
Conventional bottom-anchored subsurface moorings and associated instrumentation, as described in section 5, function acceptably in the polar oceans, but the sea ice can present issues with deployment and recovery. (So-called “anchor-last” deployments are typical because surface waves are strongly damped in the marginal ice zone and virtually absent in solid ice cover.) While profiling floats and gliders have revolutionized ocean observation at temperate and tropical latitudes, their standard versions are not well suited to high-latitude work owing to their need to surface to geolocate and telemeter data. Sea ice can block surface access and/or damage instruments via impact and/or compression. Specialized profiling floats and gliders have been developed and fielded in recent years that sense upper-ocean conditions and do not attempt to surface if sea ice may be present (e.g., Klatt et al. 2007); acquired data are stored on board and only telemetered when the device can surface. A shortcoming is that geographic positions may not be available for the subsurface observations, although this can be addressed by establishing regional acoustic ranging networks (Lee and Gobat 2008). In the absence of such networks, dead reckoning is used to assign position estimates to subsurface observations. With time intervals between surfacings often extending to several months over winter, dead reckoning position uncertainty can be significant. Possibly due to differences in upper-ocean stratification and sea ice conditions, conditional surfacing schemes installed in profiling floats have thus far found more success in the Southern Ocean than in the Arctic.
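Dead-reckoned under-ice positions are commonly reconciled with the next GPS fix by spreading the surfacing mismatch linearly in time across the submerged track. A minimal sketch of that correction (the numbers and the linear error-distribution choice are illustrative; operational processing may differ):

```python
def correct_dead_reckoning(times, dr_positions, surfacing_fix):
    """Linearly distribute the surfacing position error over an under-ice
    dead-reckoned track.

    times: sample times (s), dive at times[0], surfacing at times[-1].
    dr_positions: dead-reckoned (x, y) positions (m), initialized at the
        last GPS fix before the dive.
    surfacing_fix: (x, y) of the GPS fix obtained at surfacing.
    """
    t0, t1 = times[0], times[-1]
    ex = surfacing_fix[0] - dr_positions[-1][0]   # accumulated x error
    ey = surfacing_fix[1] - dr_positions[-1][1]   # accumulated y error
    corrected = []
    for t, (x, y) in zip(times, dr_positions):
        frac = (t - t0) / (t1 - t0)               # 0 at dive, 1 at surfacing
        corrected.append((x + frac * ex, y + frac * ey))
    return corrected

# Example: a 2-h segment dead-reckoned 1000 m east; the surfacing fix shows
# the true displacement was 1100 m east and 200 m north.
track = correct_dead_reckoning(
    [0.0, 3600.0, 7200.0],
    [(0.0, 0.0), (500.0, 0.0), (1000.0, 0.0)],
    surfacing_fix=(1100.0, 200.0),
)
print(track)
```

With surfacing intervals of months under winter ice, the linear assumption is crude, which is why acoustic ranging networks (Lee and Gobat 2008) are preferred where they exist.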
An innovative related approach to sampling the polar ocean, particularly in and about marginal ice zones, is to temporarily mount small instrument systems on sea mammals (Roquet et al. 2017; see www.meop.net). To date, several hundred diving marine animals, mainly Antarctic and Arctic seals, have been fitted with sensor tags that measure various ecological parameters along with oceanographic data (e.g., temperature, conductivity, pressure). Instrumented sea mammals have collected more than 500 000 vertical profiles of temperature and salinity since 2004 to depths as great as 2000 m. Absolute calibration of these data remains an issue, particularly for conductivity.
An alternate technology for sampling the ice-covered oceans has been increasingly fielded over the last decade—ice-tethered instrument systems. Generically, these systems support subsurface instrumentation on a tether suspended from a buoy that is frozen into the sea ice. The buoy and underwater attachments drift with the supporting ice floe. Data from the subsurface instruments are relayed to the surface buoy through conductor(s) in the tether and are subsequently telemetered via satellite to shoreside users. Systems have been deployed with a series of discrete sensors at fixed positions along the tether or with a single sensor package that moves up and down the tether to acquire high-vertical-resolution profiles. An example of the latter is the WHOI-developed Ice-Tethered Profiler (ITP) system (Krishfield et al. 2008). Thanks to support from U.S., Canadian, European, and Asian agencies and collaborators, nearly 100 ITP systems have been fielded in the Arctic since 2004, collectively returning more than 75 000 profiles of upper ocean temperature and salinity, with a subset of systems also making velocity or biogeochemical observations. Through a mix of observations from autonomous and manned instrument systems, the deep Arctic can no longer be characterized as poorly sampled (Fig. 3-29). That said, the shelf regions remain a challenge, as do system lifetimes in the dynamic sea ice domain.
Russ Davis, Dean Roemmich, Dan Rudnick, Robert Weller, and Michael McPhaden acknowledge the NOAA Climate Program Ocean Observing and Monitoring program for support of this effort. John Toole’s contributions were supported by the National Science Foundation. Jack Barth acknowledges National Science Foundation and NOAA NOS’s support through the Integrated Ocean Observing System. Lynne Talley acknowledges support from NSF GEO, which provides part of the support for the U.S. GO-SHIP program. This chapter is PMEL Contribution 4758.
Coauthors are listed in the order of their primary sections.