GeoColor: A Blending Technique for Satellite Imagery

Steven D. Miller, Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado

Daniel T. Lindsey, National Oceanic and Atmospheric Administration/National Environmental Satellite, Data, and Information Service/Center for Satellite Applications and Research/Regional and Mesoscale Meteorology Branch, Fort Collins, Colorado

Curtis J. Seaman, Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado

Jeremy E. Solbrig, Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, Colorado

Abstract

Value-added imagery is a useful means of communicating multispectral environmental satellite radiometer data to the human analyst. The most effective techniques strike a balance between science and art. The science side requires engineering physical algorithms capable of distilling the complex scene into a reduced set of key parameters. The artistic side involves design and construction of visually intuitive displays that maximize information content within the product image. The utility of such imagery to human analysts depends on the extent to which parameters or features of interest are conveyed unambiguously. Here, we detail and demonstrate a dynamic blended imagery technique, based on spatially variant transparency factors whose values are tied to algorithmically isolated parameters. The technique enables seamless display of multivariate information, and is applicable to any imaging system based on red–green–blue composites. We illustrate this technique in the context of GeoColor—an application of the Geostationary Operational Environmental Satellite R (GOES-R) series Advanced Baseline Imager (ABI) supporting operational forecasting and used widely in public communication of weather information.

Supplemental information related to this paper is available at the Journals Online website: https://doi.org/10.1175/JTECH-D-19-0134.s1.

Denotes content that is immediately available upon publication as open access.

© 2020 American Meteorological Society. For information regarding reuse of this content and general copyright information, consult the AMS Copyright Policy (www.ametsoc.org/PUBSReuseLicenses).

Corresponding author: Steven Miller, steven.miller@colostate.edu


1. Introduction

From the earliest forays into satellite-based environmental monitoring, nadir-viewing camera and imaging sensors have provided a unique perspective on our planet. The provenance of meteorological satellites traces back to a singular event on 17 July 1929 in Auburn, Massachusetts, when Dr. Robert H. Goddard launched a 3.35 m long liquid-fuel rocket, carrying a parachute-deployed camera, thermometer, and barometer to a grand height of 27 m above the surface (www.spaceline.org/history/22.html). After considerable advances to rocketry and imaging technologies during World War II, spurred on by the ensuing Cold War between the United States and the Soviet Union, the satellite era as we know it today took flight on 1 April 1960 with the launch of the Television and Infrared Observational Satellite 1 (TIROS-1). From its ~650 km orbit altitude, the two camera instruments on TIROS-1 offered a new perspective on meso- to synoptic-scale weather systems. The first images of TIROS-1, however grainy and unimpressive by today’s imagery standards, heralded a paradigm shift in how we would observe weather/climate processes and advance our numerical analysis and forecasting capabilities in the decades to come.

The imaging systems of these pioneering satellites consisted of only a few spectral bands (e.g., broadband visible and atmospheric window infrared). Since then, marked advances to the spectral, spatial, temporal, and radiometric resolution of modern-day satellite imaging radiometer systems have augmented environmental characterization capabilities dramatically. The newest generation of geostationary satellite imagers, represented by Japan’s Himawari-8/9 Advanced Himawari Imager (AHI; Bessho et al. 2016; Murata et al. 2018), the U.S. Geostationary Operational Environmental Satellite (GOES)-R series Advanced Baseline Imager (ABI; Schmit et al. 2017), and most recently Korea’s Geostationary Korea Multi-Purpose Satellite 2A (GEO-KOMPSAT-2A) and its Advanced Meteorological Imager (AMI; Park et al. 2016), offer 16 broad spectral bands spanning ~0.4–14 μm, with 0.5–2.0 km spatial resolution, temporal resolution as fine as 30 s (for mesoscale domains), and up to 14-bit radiometric pixel depth. This performance places these sensors on par with low-Earth-orbiting imagers and offers a detailed and quantitative description of a wide array of surface and atmospheric parameters (Schmit et al. 2017, 2018).

Commensurate with these sensor advances are improvements to the overall quality and information content of the imagery. However, with the copious volumes of new data come unique challenges as well. Namely, with so many independent pieces of information now available, it is impractical for a human analyst to consider them independently, particularly in a time-critical operational forecast setting. In many cases, isolating a unique signal (or physically based “spectral fingerprint”) characteristic of a given environmental parameter requires the interrogation of multiple spectral bands, including comparative techniques such as channel differencing or channel ratios.

Multispectral imagery provides a practical means of visualizing the same signals that are used by quantitative retrieval algorithms, making the fusion of such value-added imagery and derived products extremely valuable. As sensors continue to advance toward hyperspectral capabilities (e.g., Transon et al. 2018), where literally thousands of spectrally narrow channels are used to resolve broader spectral regions, a strategy for effectively distilling information and communicating it to the human analyst becomes an increasing imperative and challenge.

This paper offers a new approach in this regard, with the discussion structured as follows. Section 2 provides relevant background on the common variants of imagery compositing and blending. Section 3 describes the philosophy and mechanics of dynamic multidimensional blending. Section 4 illustrates the application of this blending technique to produce GeoColor (V1.0), a form of blended imagery designed to anticipate the capabilities of GOES-R ABI. Section 5 proceeds to detail GeoColor V2.0, including application to actual GOES-R ABI data and other examples. Section 6 concludes the paper with a discussion of challenges and a perspective on future applications of multidimensional blending.

2. Background on imagery techniques

As a way of providing context and motivation for multidimensional imagery blending, we begin with a discussion of conventional imagery rendering techniques, adding levels of complexity incrementally.

a. Red–green–blue composite imagery

A popular technique for displaying multispectral satellite imagery is the red–green–blue (RGB) composite (d’Entremont and Thomason 1987). It is so called for its basis in the additive color model, where red, green, and blue primary colors combine to describe the full spectrum of color space. Images rendered via RGB are sometimes referred to generally as false color, for the arbitrary use of color to enhance features of interest via their spectral fingerprints. False color RGB images offer a distinct advantage over conventional 8-bit imagery in their ability to utilize all the available color space. Whereas 8-bit images are presented typically either in grayscale, or indexed to a predefined 256-element color palette, RGB imagery offers access to 24-bit color space. Specifically, RGB provides 8 bits of information to each of the three color components, yielding ~16.78 million color possibilities. This diverse color space is a potential boon of information to the human analyst; whereas we can distinguish only about 30 shades of gray, the three types of cone cells in our retinas enable us to distinguish about 10 million unique colors (Kreit et al. 2013).

The principal limitation of conventional false color RGBs is relatively limited control over the immense 24-bit color space (>16.7 million possible colors, compared to the human eye’s sensitivity to roughly 1 to 10 million colors), particularly in terms of how environmental scene constituents are depicted when applying various channel combinations to the RGB components. Whereas the scene feature(s) of interest may be well enhanced via simple comparative channel techniques, this enhancement is often accompanied by arbitrary and potentially distracting or even ambiguous (false alarm) coloration imparted to other scene constituents. Preprocessing the data to isolate and quantify information of interest helps to mitigate these problems, but doing so may come at the expense of losing other potentially important scene information that gives context to the human analyst pertaining to the feature of interest. As such, most RGB imagery developers elect to retain the full scene information, tolerating any color artifacts, and then instruct users on proper color interpretation via training materials—examples of which include the Cooperative Program for Operational Meteorology, Education and Training (COMET), the Virtual Institute for Satellite Integration and Training (VISIT), and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) International Training Project (EUMETRAIN). Another important benefit of these RGB techniques is that their simplicity makes them readily portable to various processing and display systems.

A special class of RGB imagery, wherein the red–green–blue spectral bands are matched to the respective RGB color components, is true color imagery (e.g., Miller et al. 2012; Hillger et al. 2011; Bah et al. 2018). The intended effect of true color is to approximate the appearance of a daytime scene as it would be perceived via human color vision without deficiency. Technically, true color is a special case of false color, as the satellite channels do not map identically to the human eye’s retinal spectral response functions for normal photopic (color) vision (Miller et al. 2016). However, in many cases the mapping is sufficiently close that true color offers a good qualitative approximation to color photography. Thus, true color provides a visually intuitive baseline and training aid for the interpretation of other false color enhancements, as well as an effective means to communicate satellite imagery to the public.

b. Imagery blending strategies

1) Binary stitching

Basic forms of satellite imagery involve the display of only one field (i.e., one spectral band, or one derived parameter) at a time. More advanced displays may juxtapose two different fields in the horizontal dimension (i.e., side by side), in the vertical dimension (i.e., the stacking of multiple fields), or in both dimensions. These combinations can be done in a simple way via binary “either/or” logic.

One example of binary blending in the horizontal dimension is a stitched composite of daytime visible (VIS) and nighttime infrared (IR) imagery. Using a threshold value of solar zenith angle (e.g., 90°, defining the terminator) as the stitching line, the data are scaled over a specified range of values (more about this in section 3) and displayed with VIS on the dayside and IR on the nightside of the stitching line. Such imagery provides continuous (24-h) coverage that gives day/night context.

Binary stitching in the vertical dimension combines upper and lower imagery layers, with either full opacity or full transparency applied to portions of the upper layer (again, the either/or logic applied here). Regions of upper-layer opacity obscure the lower layer, while regions of upper-layer transparency reveal the lower layer. For example, we may consider vertically stacking an IR image atop a VIS image, with opacity applied only to the coldest portions of the IR image and full transparency applied to warmer portions. In this way, deep convection (typically, much colder than the surrounding clear-sky scene) is highlighted in IR temperature data indexed to a color bar, with VIS imagery displayed in the nonconvective (and warmer IR, made transparent) regions.
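
As a minimal illustration of this either/or logic, the following NumPy sketch stitches scaled daytime VIS and scaled (reversed) nighttime IR along the 90° solar zenith angle terminator. The function names, array shapes, and scaling bounds are illustrative assumptions, not code from the GeoColor system.

```python
# Minimal sketch of binary day/night stitching (illustrative only).
import numpy as np

def scale(data, vmin, vmax):
    """Linearly scale data to [0, 1], clipping outside [vmin, vmax]."""
    return np.clip((data - vmin) / (vmax - vmin), 0.0, 1.0)

def binary_stitch(vis_refl, ir_bt, solar_zenith_deg):
    """Display scaled VIS where the sun is up, scaled (reversed) IR elsewhere."""
    vis = scale(vis_refl, 0.0, 100.0)        # percent reflectance
    ir = 1.0 - scale(ir_bt, 200.0, 280.0)    # reversed so cold clouds are bright
    day = solar_zenith_deg < 90.0            # terminator as the stitching line
    return np.where(day, vis, ir)
```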

2) Uniform partial transparency

A more sophisticated approach to imagery blending is assignment of partial transparency to one of the imagery layers, such that horizontal blending occurs as a fade instead of as an either/or (i.e., binary) toggling between the two layers. In the simplest approach, partial transparency is assigned uniformly to the upper layer of a two-layer imagery stack, allowing for a fade between the two layers as the transparency factor is modulated. The approach is useful for comparing imagery dynamically, but can be less useful in situations when only a small portion of the upper layer contains valid/relevant data, and the lower layer is altered by whatever background color was applied to the upper layer.

An example of uniform partial transparency is the “sandwich product,” which blends a scaled, color-enhanced IR image atop a grayscale VIS image (e.g., Setvák et al. 2013). Here, a constant (spatially invariant) transparency factor is applied to an upper layer composed of IR imagery—providing a bleed through of the underlying VIS layer information. The VIS imagery shows detailed cloud-top texture features (e.g., shadows associated with high-relief cloud-top structure), while the IR imagery provides insight on the temperature structure.

An example of the sandwich product is shown in Fig. 1 for a cluster of storms in the southeast United States, as observed by GOES-16 ABI at 2319 Coordinated Universal Time (UTC) 6 April 2018. Here, the blending of IR imagery atop VIS reflectance at a semitransparency of 70% allows the texture and shadows of the VIS image to bleed through, highlighting overshooting storm tops. Such detail provides forecasters with valuable insight on the locations and nature of the most intense embedded convection.
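
A minimal sketch of the sandwich blend is given below, assuming coregistered arrays already scaled to [0, 1]; the variable and function names are ours, and the 70% transparency follows the Fig. 1 example.

```python
# Sketch of the "sandwich product": a color-enhanced IR layer blended atop a
# grayscale VIS layer with a single, spatially uniform transparency factor.
import numpy as np

def sandwich(ir_rgb, vis_gray, transparency=0.7):
    """ir_rgb: (ny, nx, 3) color-enhanced IR in [0, 1];
    vis_gray: (ny, nx) VIS reflectance in [0, 1]."""
    vis_rgb = np.repeat(vis_gray[..., None], 3, axis=2)  # grayscale to RGB
    alpha = 1.0 - transparency                           # opacity of the IR layer
    return alpha * ir_rgb + (1.0 - alpha) * vis_rgb
```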

Fig. 1. Example of dynamic imagery blending via the “sandwich product” for a GOES-16 ABI image of thunderstorms at 2319 UTC 6 Apr 2018. (a) Color-enhanced infrared imagery is superimposed upon (b) visible reflectance imagery at a spatially uniform transparency factor of 70% to yield (c) the blended image.

3) Dynamic blending

Still more sophisticated imagery blending enlists a spatially variable partial transparency, wherein a transparency factor (a real number N ∈ [0.0, 1.0], with 0.0 being completely transparent and 1.0 being completely opaque) is associated with each pixel of the image to be blended. In the case of horizontal blending between daytime VIS and nighttime IR imagery across the terminator region, the aforementioned binary stitching approach can be smoothed by introducing a dynamic transparency factor whose value is indexed to a normalized range of solar zenith angles (via math described in section 3). In the case of a two-layer imagery stack, a dynamic transparency factor associated with the top layer provides a spatially variable blending with the lower layer.

The dynamic transparency factor approach has been implemented on Google Earth to enable a dynamic global cloud layer (Turk et al. 2010). Here, a scaled value of the infrared brightness temperature (BT) is used as a proxy for upper-layer transparency (e.g., colder temperatures index to less transparency). The lower imagery layer of the two-layer stack is a high resolution “true color” base map. For daytime imagery, the magnitude of VIS reflectance is indexed to transparency in a similar way. The key difference between this technique and the aforementioned sandwich product is that the Google Earth cloud layer defines a variable transparency factor at every pixel in the image, as opposed to a uniform transparency factor applied to all pixels.

An example of both the dynamic transparency factor and vertical binary-stitching (mentioned in section 2b) approaches is shown in Fig. 2, where a variably transparent cloud layer is overlaid upon a true color base map, the result of which is then overlaid by a layer of radar-derived precipitation information. The radar layer is displayed as a binary stitch, with zero transparency (i.e., full opacity) assigned wherever valid radar data above a threshold reflectivity value are present, and full transparency to this imagery layer elsewhere. A nuance made apparent by this example is that a binary stitch can be achieved simply by assigning binary (0/1) values to a continuously defined transparency factor. The information is maintained as individual imagery layers in the Google Earth application (provided in portable network graphic (PNG) format, with transparency information assigned to a so-called α layer comprising a 32-bit RGBα image), allowing for interactive toggling of layers in a way reminiscent of a geographic information system (GIS) framework.

Fig. 2. Example of the dynamic transparency factor for information displayed in Google Earth, following the technique of Turk et al. (2010). Radar-indicated precipitation (rainbow color) with zero transparency for valid data and a variable transparency infrared-based cloud field (white/gray) overlay a static true color surface background.

c. Generalized multilayer, multidimensional imagery blending

The multidimensional blending technique generalizes the dynamic transparency factor approach to accommodate multiple nested imagery layers in the vertical dimension and seamless blending in the horizontal dimension. Thus, it is extendable to as many layers as required for the effective display of available imagery information. GeoColor, an instantiation of multidimensional blending, was developed initially to demonstrate to National Weather Service (NWS) and Department of Defense (DoD) forecasters the expected performance of the GOES-R ABI, using true color base maps from the Moderate Resolution Imaging Spectroradiometer (MODIS) Blue Marble dataset [National Aeronautics and Space Administration (NASA) Earth Observatory]. In sections to follow, we develop the generalized multidimensional blending approach, and illustrate its performance in the context of the GeoColor application.

3. Methodology

This section describes the mathematical construct for multidimensional blending, that is, multiple nested imagery layers, each governed by an associated field of dynamic transparency factors. The blending technique itself is generally applicable to any form of RGB digital imagery, including those based on nongeospatial datasets. When properly engineered, the result of multidimensional blending is a powerful and visually intuitive presentation of disparate information. Efficacy of the preprocessing steps, done algorithmically to isolate various physical parameters of interest, is of paramount importance to the overall control, quality, and unambiguous communication of the information. The blending of multiple layers is done via dynamic transparency factors assigned to each layer as described for a two-layer system in section 2b(3). Doing so allows for spatially dynamic bleed through of information from the underlying layers for nonopaque upper layers.

To illustrate the mathematical construct of multidimensional blending, we revisit the simple case of two imagery layers, each defined by RGB components and each of identical pixel dimension (for geospatial imagery, the assumption is that the images are coregistered). If we imagine these two imagery layers stacked atop each other and viewed from above, there will be a foreground (FR,G,B, or upper layer) and background (BR,G,B, or lower layer) image. Conceptually, our goal is to produce a combined RGB image (CR,G,B) that blends the two image layers via a spatially variable transparency factor (T)—one that is defined at every pixel location in the image. The transparency factor itself may be RGB component dependent, but for simplicity here it will be assumed as “gray” (RGB independent). Modulating the transparency factor across the color components would modulate the information layer’s native color. An example of this modulation, which we refer to as feature imprinting, is presented in the appendix.

To describe the blending operation numerically, we first introduce a generalized normalization operator, N(x), defined at every pixel in the image for parameter x over a predetermined scaling bound interval of physical parameter space [y1, y2]:
$$
N(x)_{[y_1, y_2]} =
\begin{cases}
0 & \text{if } x < y_1 \\
(x - y_1)/(y_2 - y_1) & \text{if } y_1 \le x \le y_2 \\
1 & \text{if } x > y_2.
\end{cases}
\tag{1}
$$
The value of N is thus a real number defined over the interval [0.0, 1.0]. The physical scaling bounds [y1, y2] are set to represent the minimum and maximum values of the acceptable range for the physical parameter represented by that layer. For example, if the layer represents lower-tropospheric cloud temperatures, the bounds might be defined between the expected range of temperatures from the surface to the 850 hPa level, as determined from model, sounding, or by regional climatology. If the layer is a visible reflectance, the bounds may be set between 0% and 100% reflectance, or some other range of reflectance as required to emphasize the feature of interest. If the layer is a retrieved parameter, such as cloud-top height, and the desire is to emphasize the high clouds, the bounds can be set to an expected minimum/maximum range for that parameter (e.g., 10 to 15 km). The objective of Eq. (1) is to produce a normalized version of that parameter that captures the physical range of user interest, which can then be combined with other normalized parameters.
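
For reference, a direct NumPy rendering of the normalization operator in Eq. (1) is sketched below; the function name and the cloud-top height example bounds are illustrative.

```python
# Sketch of the normalization operator N(x)[y1, y2] of Eq. (1).
import numpy as np

def normalize(x, y1, y2):
    """0 below y1, 1 above y2, linear in between."""
    return np.clip((np.asarray(x, dtype=float) - y1) / (y2 - y1), 0.0, 1.0)

# e.g., emphasize high clouds by scaling cloud-top height between 10 and 15 km:
# N_high = normalize(cloud_top_km, 10.0, 15.0)
```
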
The normalized parameter provided by Eq. (1) can then be used as a transparency factor, governing the blending of the foreground upon the background RGB imagery layers to form a combined image (CR,G,B):
$$
C_{R,G,B} = N\,F_{R,G,B} + (1 - N)\,B_{R,G,B}. \tag{2}
$$
Here, a value of N = 1.0 represents a foreground layer that is completely opaque, while N = 0.0 renders the foreground layer completely transparent. Values of N between 0.0 and 1.0 allow for semitransparent blending of the foreground atop the background layer. Since N is a spatially variable quantity (i.e., each pixel is assigned an independent N), it also permits the horizontal blending of layers. Merging two spatially adjacent, temporally coincident satellite images can produce an expanded-domain composite—this technique was used in GeoColor V1.0 demonstrations to blend GOES-East and GOES-West time-matched imagery to provide seamless coverage of the contiguous U.S. (CONUS) domain.
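
A per-pixel implementation of Eq. (2) can be sketched as follows, assuming coregistered foreground and background arrays of shape (ny, nx, 3) and a "gray" transparency field N; the helper name and array layout are assumptions for illustration.

```python
# Per-pixel blend of a foreground RGB layer over a background RGB layer, Eq. (2).
import numpy as np

def blend(foreground, background, n):
    """C = N*F + (1 - N)*B, with N in [0, 1] defined per pixel (or scalar)."""
    n = np.asarray(n, dtype=float)
    if n.ndim == foreground.ndim - 1:   # expand a "gray" N to the RGB axis
        n = n[..., None]
    return n * foreground + (1.0 - n) * background
```
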
Vertical blending is not limited to a two-layer stack; it can be extended to multilayered stacks with dynamic transparency factors assigned to each layer. Instead of having just a foreground image layer and a background image layer, consider the case of blending three layers (L1, L2, and L3) as a multilayer “nested vertical stack” using layer-specific transparency factors (N1, N2) for the upper two layers:
$$
C = N_1 L_1 + (1 - N_1)\left[N_2 L_2 + (1 - N_2) L_3\right], \tag{3}
$$
where the (R, G, B) subscripts on C and L have been dropped, and the transparency factors N1,2 are applied uniformly to the RGB components. Comparing Eq. (3) to Eq. (2), we have simply substituted the background layer by a blend between layers L2 and L3. Extrapolating this concept recursively to a Z-layer vertical stack yields a general form:
$$
C = N_1 L_1 + (1 - N_1)\Big[N_2 L_2 + (1 - N_2)\big[\cdots\big[N_{Z-1} L_{Z-1} + (1 - N_{Z-1}) L_Z\big]\cdots\big]\Big]. \tag{4}
$$
Horizontal blending of two such Z-layer recursively nested vertical stacks [e.g., C1 and C2, constructed per Eq. (4)] across an interface (e.g., the day/night terminator, a geographic boundary such as land/sea, or a satellite zenith angle to merge adjacent satellite coverage areas) to form a merged stack is done via Eq. (2).

All components (Nx, Lx) of Eq. (4) are normalized, such that their combination provides pixel values defined over the range [0.0, 1.0]. Postmultiplication of each Cx RGB component by 255, and rounding decimal values to whole numbers, provides the [0, 255] 8-bit (byte) range of each color component, allowing for their final combination as RGB composite imagery.
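
The recursion of Eq. (4) and the final byte conversion can be sketched as below, folding Eq. (2) upward from the bottom of the stack; the list-of-layers interface is an illustrative choice rather than part of the formal method.

```python
# Recursive Z-layer nesting per Eq. (4), followed by 8-bit conversion.
import numpy as np

def blend_stack(layers, bottom):
    """layers: [(N1, L1), (N2, L2), ...] ordered top-down; bottom: L_Z."""
    combined = bottom
    for n, layer in reversed(layers):              # start just above the bottom layer
        n = np.asarray(n, dtype=float)[..., None]  # broadcast over the RGB axis
        combined = n * layer + (1.0 - n) * combined
    return combined

def to_bytes(rgb):
    """Convert normalized [0, 1] RGB components to the [0, 255] byte range."""
    return np.rint(np.clip(rgb, 0.0, 1.0) * 255).astype(np.uint8)
```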

Layering strategies

When forming multidimensional blends, it is important to keep in mind the perspective and priorities of the target audience, as well as the physical meaning of the layers themselves. Algorithmically isolated information allows for stacking of layers in arbitrary order (i.e., what constitutes foreground vs background layers), so judicious ordering of the layers is required to ensure a meaningful end result. For example, if we are working with information layers representative of Earth’s surface, the lower troposphere, and the upper troposphere, a natural selection would be to stack the layers in geometric order from top-down. In more abstract blending concepts, it is up to the developer to decide what layer should be regarded as the top-level information, and then stack secondary/tertiary/etc. layers accordingly. For example, a time series of images might involve placing the most recent information atop older layers. Ultimately, the goal is to design an end product that communicates the salient information effectively. The GeoColor application is one instantiation of possibly many, providing context to this concept, and will be the focus for the remaining discussion.

4. GeoColor, version 1.0: Previewing GOES-R/ABI

GeoColor is an application of multidimensional blending that aims to consolidate disparate information and facilitate scene interpretation by the human analyst of geostationary satellite imagery. It does so by displaying several kinds of day and night GOES imagery simultaneously, combining independent imagery layers that each have access to the full RGB color space. The first version of GeoColor (V1.0) was designed to anticipate and demonstrate certain ABI multispectral imagery capabilities to the operational forecasting community. GeoColor V1.0 was first demonstrated by the Naval Research Laboratory, Monterey, in support of their “NexSat” satellite meteorology web page (Miller et al. 2006b; Kuciauskas et al. 2013). Subsequently, it was demonstrated semioperationally to NWS forecasters as part of NOAA’s Satellite Proving Ground (Goodman et al. 2012) in the years leading up to the launch of GOES-R. This section details the mathematical formulation of GeoColor V1.0 using the multidimensional blending technique outlined in section 3.

Figure 3 decomposes the various components of GeoColor V1.0, illustrating how the multivariate composite is built out of vertical and horizontal layer blending. In this example, collected from GOES-East (75°W) and GOES-West (135°W) at 0000 UTC 14 September 2005, the eastern half of the United States is in total darkness, while the western half is still illuminated by afternoon sunlight and dusk twilight. Infrared data, which are available at all times, exist across the entire domain, but are only displayed on the nighttime portion of the scene, where visible data are absent. As detailed below, the daytime and nighttime imagery are layer stacks that are first blended vertically, using scaled and normalized versions of the satellite data as weighting terms. The cosine of the solar zenith angle, μo = cosθo, is then used as a dynamic blending factor between the two (dayside and nightside) stacked layers. This operation results in two (one each for GOES-East and GOES-West) vertically blended, time-matched images that are then blended across a 10°-wide meridional zone centered at 100°W (selected arbitrarily) to produce coverage across the entire contiguous United States and surrounding regions for all times of day/night.

Fig. 3. Piecing together the components of GeoColor V1.0 imagery as a way of previewing GOES-R ABI capabilities. Dynamic transparency fields blend GOES-E and GOES-W (top left) visible imagery atop the MODIS Blue Marble on the dayside, and (top right) infrared imagery atop a nighttime lights mapped background on the nightside. The stacks are blended across the day/night terminator via (middle) cosine-weighted solar zenith angle data valid at the image collection time. (bottom) The final blended product.

When combining imagery from two or more satellites with different viewing geometries, parallax displacements of the cloud field arise, in addition to cloud advection/evolution if the observations are not matched identically in time. These issues can be mitigated by enlisting a cloud height retrieval-based parallax correction and imposing temporal matching criteria commensurate with the spatial scale of the imaged domain. These are considered as optional preprocessing steps that can improve the quality of the end product, but are beyond the scope of the current discussion.

The inherent value of true color is the communication of digital data in a way that is familiar to normal color vision, thus providing a form of imagery that is intuitive to the human analyst. True color thus requires minimal user training and can serve readily as a benchmark to interpreting various false color enhancements. In GOES-R program brochures and online advertising, true color imaging was an implied capability of the forthcoming ABI—which would make it the first geostationary satellite to provide such imagery over the CONUS region since the Applications Technology Satellite 3 (ATS-3) in 1967 (Suomi and Parent 1968). Although, strictly speaking, the construction of true color from the ABI was not possible for lack of a native green band, a satisfactory approximation to this band from the available information was anticipated (Hillger et al. 2011; Miller et al. 2012). Thus, the objective of GeoColor V1.0 was to demonstrate to forecasters and the public how a true color view of Earth and its evolving weather would approximately appear via GOES-R.

The GOES-N/O/P series imagers did not possess the multispectral visible band information needed to render true color. To approximate the capability in GeoColor V1.0, which utilized GOES-N/O/P for its demonstrations, a dynamic blend was constructed between (i) the single available (legacy GOES) visible reflectance, and (ii) a background layer of cloud-cleared composite true color imagery from MODIS (an imager on board the polar-orbiting Terra and Aqua satellites) Blue Marble—a dataset produced by NASA. While this static surface layer could not capture real-time changes to most surface properties (e.g., fire burn scars and floods), certain evolving surface features such as snow cover were represented via the GOES imagery itself. Here, the high visible reflectance of snow cover (detectable by GOES during the day) translated to a correspondingly low transparency (high opacity) in the GeoColor V1.0 daytime visible “cloud” layer, allowing for daytime snow fields to appear atop the Blue Marble background.

To complement the true color imagery and provide a 24-h continuous product that also communicates time of day, GeoColor V1.0 included a blend of IR data atop a nightscape background (crafted to resemble a moonlit surface) of elevation-enhanced terrain and nighttime city light information. The city lights information came from a 2003 Nighttime Lights of the World database (e.g., Elvidge et al. 2001) produced by NOAA’s National Centers for Environmental Information (NCEI). It is a static field based on composited observations from the Operational Linescan System (OLS) on the Defense Meteorological Satellite Program (DMSP) series of polar-orbiting satellites. The OLS is designed to sense broadband (500–900 nm) visible-spectrum light over a large dynamic range, from daylight down to moonlight levels (in-band radiances on the order of 10² W m−2 sr−1 down to 10⁻⁵ W m−2 sr−1). Whereas the lights of this background do not represent disaster impacts (e.g., power outages due to landfalling hurricanes), NWS forecasters use them as reference points for orienting evolving meteorological features to regions of higher population density. In the framework of GeoColor V1.0, the overlapping cloud layers obscure these city lights to an extent determined by the value of the associated layer transparency factors, thus providing an additional level of realism to this city lights layer.

Following the methodology outlined in section 3, GeoColor V1.0 is built from multiple layer components. To combine this information, transparency factors must be assigned to the various overlapping layers. Using Eq. (1), we define these layer transparency factors in shorthand:
$$
\begin{aligned}
N_{\mathrm{VIS}} &= N(\mathrm{VIS})_{[0.0,\,120.0]}, \\
N_{\mathrm{IR}} &= 1.0 - N(\mathrm{IR})_{[200,\,280]}, \\
N_{L} &= N(L)_{[10,\,50]}, \\
N_{E} &= N(E)_{[0.0,\,10.0]}, \\
N_{\mu_o} &= \left[N(\mu_o)_{[0.1,\,0.3]}\right]^{1.5}.
\end{aligned}
\tag{5}
$$
Here, GOES visible channel reflectance data (VIS; provided as values of percent reflectance) are normalized between 0% and 120%. A reflectance value exceeding 100% accounts for three-dimensional scattering effects (e.g., side illumination of cumulus clouds) that would otherwise result in saturation and truncation of cloud brightness in these areas. GOES infrared channel BT data (IR; provided in kelvin) are scaled between 200 and 280 K. In some implementations of GeoColor V1.0, histograms of the VIS and IR data are computed, and the minimum and maximum bounds are determined by the 1st and 99th percentiles, respectively, to optimize the dynamic range considered in the scaling. Care must be taken when applying this histogram technique to a smaller spatial domain where rapid fluctuations in scene contents over time may occur due to cloud advection. In general, histogram scaling is recommended only for large domains where image-to-image time changes in the bulk VIS and IR distributions due to cloud field variations are small.

The static OLS nighttime city lights data (L; provided as 6-bit values of relative luminosity) are normalized via Eq. (1) over the interval [10, 50]. A 2-arc-min (~3.7 km) resolution U.S. Geological Survey (USGS) terrain elevation database (E; provided in units of km above mean sea level) is normalized between 0.0 and 10.0 km. The cosine of the solar zenith angle (μo; computed from the geolocation and time of image collection) is normalized over the interval [0.1, 0.3]. Thus, a gradual fade into nighttime begins on the dayside of the terminator, with bounds selected experimentally based on matching to the observed dimming behavior of VIS imagery near the terminator (approximating twilight effects). The exponent of 1.5 applied to this term (i.e., Nμo^1.5) further approximates the observed nonlinear decrease of VIS reflectance near the terminator.

Note in Eq. (5) that the IR data (NIR) are reversed to render cold features (typically, deep/thick clouds) most opaque for this information layer. Like the scaling of VIS reflectance, the temperature scaling is used as a proxy for cloud optical thickness, as in Turk et al. (2010). This approximation breaks down for low-level (warmer) clouds, resulting in high-transparency biases. This issue is addressed during the forthcoming discussion of GeoColor V2.0 (section 5) via introduction of a nighttime low cloud enhancement information layer.
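
Gathering these definitions, a sketch of the Eq. (5) transparency factors might look like the following; input units follow the text (percent reflectance, kelvin, relative luminosity, kilometers, and cosine of solar zenith angle), and the function names are ours.

```python
# Sketch of the GeoColor V1.0 layer transparency factors of Eq. (5),
# built from the normalization operator of Eq. (1).
import numpy as np

def normalize(x, y1, y2):
    return np.clip((np.asarray(x, dtype=float) - y1) / (y2 - y1), 0.0, 1.0)

def geocolor_v1_factors(vis, ir, lights, elev_km, mu0):
    n_vis = normalize(vis, 0.0, 120.0)           # percent reflectance
    n_ir = 1.0 - normalize(ir, 200.0, 280.0)     # reversed: cold clouds opaque
    n_l = normalize(lights, 10.0, 50.0)          # city lights luminosity
    n_e = normalize(elev_km, 0.0, 10.0)          # terrain elevation
    n_mu0 = normalize(mu0, 0.1, 0.3) ** 1.5      # nonlinear fade at the terminator
    return n_vis, n_ir, n_l, n_e, n_mu0
```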

The daytime background layer, [BD(R), BD(G), BD(B)], comes from the NASA MODIS Blue Marble (M), whose RGB components (MR,G,B) are converted to floating point numbers over the interval [0.0, 1.0] by normalizing over the range [0, 255], via Eq. (1):
$$
\begin{aligned}
B_D(R) &= N(M_R)_{[0,\,255]}, \\
B_D(G) &= N(M_G)_{[0,\,255]}, \\
B_D(B) &= N(M_B)_{[0,\,255]}.
\end{aligned}
\tag{6}
$$
The nighttime background layer is itself composed of a three-layer image stack, with nighttime city lights as the topmost layer, a surface elevation middle layer showing terrain relief, and a bluish-purple terrain “nightscape” lower layer (or black for water). These layers are blended per Eq. (3) for each color component:
$$
\begin{aligned}
B_N(R) &= S\{N_L L_R + (1.0 - N_L)[N_E T + (1.0 - N_E) R_N]\}, \\
B_N(G) &= S\{N_L L_G + (1.0 - N_L)[N_E T + (1.0 - N_E) G_N]\}, \\
B_N(B) &= S\{N_L L_B + (1.0 - N_L)[N_E T + (1.0 - N_E) B_N]\},
\end{aligned}
\tag{7}
$$
where premultiplier S is a land/sea surface mask (0 = water, 1 = land; i.e., the nighttime background is rendered black over water surfaces), and the city lights image layer color (LR, LG, LB) = (1.0, 0.85, 0.0) simulates the amber color of sodium lighting commonly used in the United States. The terrain layer parameter (T) is simply fixed at 1.0, meaning that any terrain at maximum elevation scaling would appear white in this nighttime layer. The maximum elevation scale is specified at a sufficiently high value such that mountainous terrain produces only a subtle grayscale modulation atop the nightscape land surface background layer of (RN, GN, BN) = (0.27, 0.12, 0.06). Whereas the appearance of this nighttime background layer could be modulated based on the lunar phase and lunar zenith angle, following the lunar cycle, it is held constant for GeoColor V1.0.
Once the day- and nightside background layers are constructed via Eqs. (6) and (7), the VIS and IR foreground layers are blended atop them, and then combined in the horizontal dimension across the terminator per the solar zenith angle blending factor (Nμo), described above. The dayside and nightside components are blended as follows:
$$
C_{i=R,G,B} = N_{\mu_o}\left[N_{\mathrm{VIS}} \times 1.0 + (1.0 - N_{\mathrm{VIS}})\,B_{D,i}\,D\right] + (1 - N_{\mu_o})\left[N_{\mathrm{IR}} \times 1.0 + (1.0 - N_{\mathrm{IR}})\,B_{N,i}\right], \tag{8}
$$
where D = 0.75 is a dimming factor applied uniformly to the daytime MODIS Blue Marble background layer RGB components to improve the contrast of overlying VIS-layer features (e.g., clouds occurring over bright backgrounds such as deserts).

Note in Eq. (8) that the daytime side (Nμo = 1) blends a “white” (R = G = B = 1.0) cloud layer atop the Blue Marble with transparency governed by the magnitude of scaled VIS reflectance. Similarly, the nighttime side (Nμo = 0) blends a white cloud layer atop the nighttime background with transparency governed by the magnitude of the scaled and reversed IR BT (i.e., the normalized BT replaced by 1.0 minus itself, such that the coldest values are brightest and most opaque by the transparency factor rules). For consistency of Eq. (8) with the form of Eq. (2), we represent these white cloud layers explicitly as “1.0” multiplier terms against NVIS and NIR.

As a final step, the three terms emerging from Eq. (8) (CR, CG, and CB) are converted to byte values (multiplied by 255, and rounded to the nearest integer) and used as the components of a standard RGB three-color composite image. Metadata such as coast lines, political boundaries, latitude/longitude grid, and other information can be drawn upon the imagery as a postprocessing step to assist in end-user interpretation, or omitted if the image is meant to be integrated within a GIS-type interface.
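
An end-to-end sketch of Eqs. (7) and (8) is given below, assuming the normalized factors of Eq. (5) have already been computed; the constants follow the values quoted in the text, while the function and variable names are illustrative.

```python
# Sketch of the GeoColor V1.0 combination: nighttime background stack (Eq. 7),
# then day/night blending across the terminator (Eq. 8).
import numpy as np

AMBER = np.array([1.0, 0.85, 0.0])         # city-light color (L_R, L_G, L_B)
NIGHTSCAPE = np.array([0.27, 0.12, 0.06])  # moonlit land background (R_N, G_N, B_N)

def night_background(n_l, n_e, land_mask):
    """Eq. (7): lights atop terrain relief atop the nightscape; black over water."""
    terrain = n_e[..., None] * 1.0 + (1.0 - n_e[..., None]) * NIGHTSCAPE
    stacked = n_l[..., None] * AMBER + (1.0 - n_l[..., None]) * terrain
    return land_mask[..., None] * stacked

def geocolor_v1(n_vis, n_ir, n_mu0, day_bg_rgb, night_bg_rgb, dim=0.75):
    """Eq. (8): white cloud layers over day/night backgrounds, merged via N_mu0."""
    day = n_vis[..., None] * 1.0 + (1.0 - n_vis[..., None]) * day_bg_rgb * dim
    night = n_ir[..., None] * 1.0 + (1.0 - n_ir[..., None]) * night_bg_rgb
    return n_mu0[..., None] * day + (1.0 - n_mu0[..., None]) * night
```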

5. GeoColor 2.0: Application to GOES-R ABI

GeoColor V1.0 was demonstrated to Navy meteorology/oceanography (METOC) officers and NWS forecasters on their Advanced Weather Interactive Processing System (AWIPS) operational systems as a way of anticipating the GOES-R ABI. In the pre-GOES-R era, the only observationally based demonstrations of ABI-like capabilities came from low-Earth-orbiting satellite sensors like Terra/Aqua MODIS and eventually the Suomi National Polar-Orbiting Partnership (SNPP) Visible Infrared Imaging Radiometer Suite (VIIRS; Lee et al. 2006; Hillger et al. 2013). These sensors are rich in spectral and spatial resolution but limited in temporal resolution at lower latitudes due to their sun-synchronous polar orbits.

This temporal limitation, combined with 1–2 h latency in many cases, posed a significant obstacle to their use by operational NWS forecasters in the Proving Ground demonstrations. The only alternative forecaster demonstration product was synthetic ABI imagery, based on running a radiative transfer model with numerical model fields as input (Hillger et al. 2011; Grasso et al. 2018). While useful for analysts and developers of ABI algorithms, the simulated imagery inherits the errors of the model in describing the environmental state (e.g., cloud representation). GeoColor V1.0 aimed to strike a compromise between meeting the operational needs of the forecaster (i.e., high time resolution based on actual observations as needed for monitoring rapidly evolving weather and features not captured well by forecast models) and conveying certain multispectral capabilities of the forthcoming ABI. The blending approach enabled consideration of information not yet available in real time from the geostationary platform.

When the GOES-R series did come online, the static MODIS background layer of GeoColor V1.0 could finally be replaced with real-time updates from ABI. Furthermore, additional/improved layers could be introduced, based on new information and higher ABI spectral resolution. This section details several innovations made to GeoColor V1.0 to incorporate these advanced ABI capabilities, providing the next generation of the product, GeoColor V2.0, which is used widely in research, operations, and public circles today.

GeoColor V2.0 day- and nightside components

Just as with GeoColor V1.0, there are daytime and nighttime components in V2.0, and these are blended across the terminator. In the case of V2.0, the dayside is handled not as a stacked layer between VIS reflectance and a static background, but as a self-contained true color image requiring no vertical blending. In this sense, the application is simpler than V1.0, but the layers themselves are inherently more information rich. The nighttime side is more akin to V1.0, but introduces new layers and leverages the improved fidelity of the ABI spectral bands. The dayside and nightside components of V2.0 are described below.

To frame this discussion, Fig. 4 shows how the dayside and nightside components of GeoColor V2.0 combine to form the final blended imagery product. The quality of the dayside imagery is improved over GeoColor V1.0 (Fig. 3), due to the ABI processing described in section 5a(1). In contrast to GeoColor V1.0 there is asymmetry in terms of the layer depths of the vertical stacks, with the dayside being a single layer of information [Synthetic Hybrid Atmospherically Corrected (SHAC) true color]. Inserting a layer on the dayside (e.g., a lofted dust enhancement) between the high clouds and the surface would require definition of an independent upper layer (such as Fig. 3a).

Fig. 4. Components of GeoColor V2.0 as applied to GOES-16 ABI at 1217 UTC 1 Jul 2017. Nighttime components for (a) high cloud, (c) low cloud, and (e) surface/lights layers are vertically stacked, and then combined horizontally with (b) daytime SHAC true color using the (d) solar zenith angle as a blending factor to produce (f) the result.

1) Dayside

The daytime true color imagery layer of GeoColor V2.0 involves three main preprocessing components—(i) an atmospheric correction, (ii) the rendering of a synthetic green band, and (iii) a hybrid spectral tuning of this synthetic green band. These elements, which form the SHAC true color imagery showcased in GOES first-light imagery for GOES-16 and GOES-17 ABI, are described below.

Atmospheric (Rayleigh scattering) correction, applied to the blue, red, and near-infrared bands of the ABI, is adapted from the SeaWiFS Data Analysis System (SeaDAS). The correction can also be computed from other standard radiative transfer packages (e.g., Broomhall et al. 2019). At the high satellite zenith angles attained by geostationary observations near the limb of Earth, long atmospheric paths amplify errors in the atmospheric correction. These errors are augmented when intervening high clouds are present, as they effectively reduce the atmospheric pathlength compared to the clear line of sight to the surface that is assumed by the correction algorithm. The optical path reduction can also occur for high solar zenith angles (near the terminator). Both circumstances give rise to overcorrection, imparting a reddening effect to both the limb and near-terminator cloud imagery.

To contend with the optical path-truncation issue, Miller et al. (2016) use the 10.3 μm BT as a proxy for cloud height, truncating the Rayleigh scattering contributions accordingly. When the 10.3 μm BT exceeds 283 K, no adjustments to the atmospheric correction are made; when the temperature falls below 233 K, a 70% reduction (based on calculation of the Rayleigh optical depth from the tropopause to top of atmosphere) to the Rayleigh scatter is applied, and a linear adjustment to the Rayleigh scatter from 70% down to 0% is applied in between 233 and 283 K, following Eq. (1).
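
A sketch of this cloud-height-aware reduction is shown below; the function returns the fraction of the clear-sky Rayleigh contribution to retain, and the names are illustrative rather than taken from the operational code.

```python
# Cloud-height-aware reduction of the Rayleigh correction: no reduction for
# 10.3-um BT >= 283 K, a 70% reduction below 233 K, linear ramp in between.
import numpy as np

def rayleigh_reduction_factor(bt_103):
    """Fraction of the clear-sky Rayleigh contribution to retain (0.3 to 1.0)."""
    warm_fraction = np.clip((bt_103 - 233.0) / (283.0 - 233.0), 0.0, 1.0)
    return 1.0 - 0.7 * (1.0 - warm_fraction)   # 1.0 when warm, 0.3 when very cold
```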

The need for a synthetic green band, required along with the blue and red bands for rendering true color imagery, arises from its omission on the ABI (Miller et al. 2006a) in favor of other bands. Despite the general popularity of true color, it was not an NWS requirement, and the decision to omit the enabling green band was made under the constraint of limited space on the ABI focal plane array. Miller et al. (2012) describe a spectral correlation approach that relates MODIS 0.55 μm “green” reflectance to 0.469 μm (blue), 0.645 μm (red), and 0.858 μm (near-infrared), all bands atmospherically corrected in preprocessing. Their analysis, conducted on a diverse assortment of mesoscale scenes, shows absolute differences in real and synthetic green reflectance of 0.1 (on a scale of [0, 100]) and relative differences of 5%–10%, depending on scene properties. The greatest uncertainty in this synthetic green method occurs in shallow-water zones where the correlation between green and near-infrared chlorophyll-a reflectance is small.

The original transition plan was to develop a synthetic green correlative relationship using Himawari-8 AHI (which has a green band), and apply it directly to GOES-R ABI, since the blue, red, and near-infrared bands in common between AHI and ABI are very similar (the instruments were built by the same vendor). However, upon first light of AHI the need for an unanticipated additional step became apparent. The AHI true color imagery revealed a suboptimal response to vegetation (missing the spectrally narrow chlorophyll-a reflectance feature centered on 0.55 μm) in its native 0.51 μm “green” band compared to the 0.55 μm band of MODIS and VIIRS. The spectral misalignment produces a low bias compared to the 0.55 μm reflectance for certain land surface types, with the effects manifested in true color imagery as brown jungles/forests and deserts that were too red.

To provide an improved sensitivity to these surfaces, Miller et al. (2016) introduce a hybrid green method, which blends in a fractional component (~7%) of the AHI near-infrared band (0.86 μm), determined objectively from a minimization of the hybrid band against the 0.55 μm green band of SNPP VIIRS. For ABI GeoColor V2.0, a 0.51 μm synthetic green band is first produced from AHI-derived lookup tables (following Miller et al. 2012), and then the hybrid green step is applied using ABI’s own 0.86 μm band.

Once atmospherically corrected versions of blue, red and synthetic/hybrid green band reflectance are derived, these reflectance values are truncated between values of 2.5% and 120% (0.025, 1.20) and then log10 scaled to replicate the response of the human eye. The log-scaled reflectance data are normalized, via Eq. (1), between [−1.6, 0.176]. True color imagery rendering follows by combining these corrected, scaled, and normalized red, synthetic/hybrid green, and blue bands as a standard RGB composite. These RGB components, comprising the SHAC true color product, are referred to as the daytime layer (DLR,G,B) in equations to follow. The result of this process for GeoColor V2.0 is shown in Fig. 4b.
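
The reflectance scaling described above can be sketched as follows, with reflectance expressed as a fraction (1.0 = 100%); the function name is an illustrative assumption.

```python
# Sketch of the SHAC true color reflectance scaling: clip to [2.5%, 120%],
# take log10, then normalize over [-1.6, 0.176] per Eq. (1).
import numpy as np

def shac_scale(reflectance):
    clipped = np.clip(reflectance, 0.025, 1.20)
    return np.clip((np.log10(clipped) - (-1.6)) / (0.176 - (-1.6)), 0.0, 1.0)
```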

2) Nightside

The nighttime component of GeoColor V2.0 is like GeoColor V1.0 in the sense that it involves layered background information. However, this component takes advantage of the ABI’s superior resolution, introduces an improved layer that highlights low clouds at night, and uses higher-resolution city lights information derived from the VIIRS Day/Night Band (Elvidge et al. 2017). These layers are discussed individually here.

The topmost layer of the vertical stack in GeoColor V2.0 is an enhanced IR image (Fig. 4a). Instead of the fixed scaling bounds (200, 280 K) used in GeoColor V1.0, GeoColor V2.0 applies a slightly different logic where the maximum bound is held fixed but the minimum bound (IRmin) varies with latitude (lat):
$$
\mathrm{IR}_{\min} =
\begin{cases}
200 & \text{if } \mathrm{lat} < 30^\circ \\
200 + 20 \times (\mathrm{lat} - 30)/30 & \text{if } 30^\circ \le \mathrm{lat} \le 60^\circ \\
220 & \text{if } \mathrm{lat} > 60^\circ.
\end{cases}
\tag{9}
$$
The variation accounts, to first order, for deeper/cooler tropopause temperatures in the tropics. The normalized IR layer, NIR, is then defined in the same way as Eq. (5), but now using IRmin instead of a fixed value of 200 K, allowing more consistent saturation for tropopause-level cloud tops.
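
A sketch of Eq. (9) is given below. It uses the absolute value of latitude so that both hemispheres are treated symmetrically, which is an assumption on our part since Eq. (9) is written in terms of lat; the function name is also illustrative.

```python
# Latitude-dependent minimum scaling bound for the IR layer, per Eq. (9).
import numpy as np

def ir_min_bound(lat_deg):
    # Assumption: symmetric treatment of the two hemispheres via abs(latitude).
    ramp = np.clip((np.abs(lat_deg) - 30.0) / 30.0, 0.0, 1.0)
    return 200.0 + 20.0 * ramp   # 200 K in the tropics, 220 K at high latitudes
```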

Low clouds at night in GeoColor V2.0 (Fig. 4c) are represented as an enhancement layer using the ABI 10.3–3.9 μm brightness temperature difference (BTD), which takes advantage of the spectral emissivity differences between these two bands for liquid-phase clouds. The small droplets [e.g., ~8–12 μm droplet size distribution effective radii, as defined by Hansen and Travis (1974)] that often characterize boundary layer clouds are associated with lower emissivity in the shortwave infrared (~4 μm) atmospheric window than in the thermal infrared (~11 μm) atmospheric window (e.g., d’Entremont 1986). This disparity yields a small (few degrees kelvin) positive 11–4 μm BTD. The BTD is set to 0.0 for BT(10.3 μm) < 230 K to avoid any spurious false alarms caused by noise in the 3.9 μm band that can occur for very cold cloud tops associated with deep convection.

This low cloud at night BTD is normalized per Eq. (1) over the range [1.0, 4.5] for land surfaces, and [0.0, 4.0] over water. As such, a land/sea mask, mapped to the satellite domain, is utilized in this processing step. The larger minimum scaling value for land surfaces avoids false alarms in the low cloud field, which can arise from certain surface types having intrinsically lower emissivity due to surface mineralogy. These problematic surfaces (in terms of producing false alarms for low clouds at night) coincide most often with sparsely vegetated or desert landscapes. Such surface behaviors can be accounted for and mitigated a priori via the Dynamic Enhancement with Background Reduction Algorithm (DEBRA; Miller et al. 2017)—this comes as an additional level of preprocessing to the low cloud at night layer and is neglected here for simplicity. The normalized low cloud at night layer will be referred to as LC in equations to follow. Alternatively, operational “level 2” products related to cloud and aerosol can be enlisted as information layers.
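
A sketch of this low-cloud layer construction (without the optional DEBRA preprocessing) follows; the array names, land/sea mask convention, and function name are illustrative assumptions.

```python
# Low-cloud-at-night layer: 10.3 - 3.9 um BTD, zeroed for very cold cloud tops,
# then normalized with land/water-specific bounds per Eq. (1).
import numpy as np

def low_cloud_layer(bt_103, bt_39, land_mask):
    btd = bt_103 - bt_39
    btd = np.where(bt_103 < 230.0, 0.0, btd)               # suppress cold-top noise
    land = np.clip((btd - 1.0) / (4.5 - 1.0), 0.0, 1.0)    # land bounds [1.0, 4.5]
    water = np.clip((btd - 0.0) / (4.0 - 0.0), 0.0, 1.0)   # water bounds [0.0, 4.0]
    return np.where(land_mask, land, water)
```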

The surface layer of GeoColor V2.0 (Fig. 4e) combines an elevation-enhanced nightscape with nighttime lights information. It is similar to GeoColor V1.0 in terms of the intended result, but differs in construct. The static global nighttime lights information comes from the VIIRS day/night band (DNB) 2015 annual composite produced by NCEI (Elvidge et al. 2017); these data are map registered to the GOES ABI fixed grid (1 km nominal subsatellite pixel resolution) and are provided in in-band radiance units of nW cm−2 sr−1. Zero-value (nonlights) pixels in the remapped data are set to 1.0 × 10⁻¹⁰, and a log10 operator is applied. These log-scaled data are subsequently normalized [per Eq. (1)] over the bounds [−0.5, 2.0], yielding a version of NL as in Eq. (5). For surface layer pixel locations where the normalized lights data exceed a threshold of 0.2, an RGB triplet is defined according to the normalized light intensity:
$$
S_{R,G,B} = (N_L \times D)^{X_{R,G,B}}, \tag{10}
$$
where XR,G,B = (0.75, 1.25, 2.0) correspond to the exponents for RGB, respectively. The constant D (set to 0.8) in Eq. (10) is a dimming factor applied to the normalized data, used to suppress the brightness of the nighttime lights. The power-law scaling of Eq. (10) follows the example of Miller et al. (2018), approximating the appearance of sodium lighting (yellow/orange) to add a sense of realism to the nighttime lights, but misrepresenting the appearance of LED, mercury, xenon, or other artificial light emission types. For surface layer pixels whose normalized light data values fall below the threshold of 0.2, a nightscape with surface elevation relief is used. The same elevation database used in GeoColor V1.0 is normalized per Eq. (1) over [0, 50 km], yielding NE as in Eq. (5), which is then used to define the nonlights portion of the nighttime surface layer:
$$
S_{R,G,B} = N_E + (1 - N_E) \times C_{R,G,B}, \tag{11}
$$
where the selection of CR,G,B = (0.06, 0.03, 0.13) imparts a blue/purple color to the nonlight pixels of this layer.
The three information layers (cold cloud IR, low cloud enhancement, and surface) are combined into nighttime layer RGB components (NLR,G,B):
$$
NL_{i=R,G,B} = N_{\mathrm{IR}} \times 1.0 + (1.0 - N_{\mathrm{IR}})\left[A_i \times LC + (1.0 - LC)\,S_i\right], \tag{12}
$$
where Ai is an RGB-dependent triplet defined by (R, G, B) = (0.55, 0.75, 0.98), used to impart a light blue coloration to the low cloud at night layer, thereby distinguishing it from the grayscale cold cloud layer.

3) Combining daysides and nightsides

Finally, as in GeoColor V1.0, the day- and nightside layer components are combined using the normalized terminator blending factor, Nμo, introduced in section 4:
$$
C_{i=R,G,B} = N_{\mu_o}(DL_i) + (1.0 - N_{\mu_o})(NL_i). \tag{13}
$$

The final appearance of GeoColor V2.0 is shown in Fig. 5 for a view from GOES-16 at 0002 UTC 14 April 2019. This terminator view highlights many key environmental parameters as the sun sets over a strong midlatitude system over the central United States—lifting thick dust plumes over the southwest, spawning deep convection along the cold front, and intensifying wildland fires in the West as evidenced by copious smoke drifting southward across Baja California.

Fig. 5. Example of GeoColor V2.0 for a terminator scene over the United States as observed by GOES-16 at 0002 UTC 14 Apr 2019.

Modulation of the information layers provides an additional level of control over the blended imagery. The appendix presents additional examples illustrating how GeoColor imagery layers can be manipulated to yield different effects. These modulations occur as pre- or postprocessing steps applied to the component layers, and their implementation follows the same general blending approach outlined in sections 3 and 4. These examples emphasize the point that contributing layers to the multidimensional blending can involve significant preprocessing to tailor their information content prior to the final step of blending.

NWS forecasters in offices across the United States make practical daily use of GeoColor as a situational awareness tool. It is produced (currently, as an experimental product) in near–real time by NOAA on its GOES Image Viewer (www.star.nesdis.noaa.gov/GOES/conus.php) and is also available on the Cooperative Institute for Research in the Atmosphere (CIRA) Satellite Loop Interactive Data Explorer in Real Time (SLIDER; rammb-slider.cira.colostate.edu; Micke 2018). As such, GeoColor V2.0 is now available in real time to the general public at similar quality to that received by operational forecasters.

4) Application to other satellite imagers

The dynamic blending technique presented in this paper is not limited to the geostationary satellite platform; it can readily be applied to low-Earth-orbiting (or terrestrial-based) imagers. As a parting example, we demonstrate application of the technique to Suomi NPP VIIRS imagery for the enhancement of a volcanic ash plume produced by the eruption of Pavlof volcano, located in the Aleutian Range of Alaska, at 1324 UTC 28 March 2016.

Figure 6 shows how multispectral information from VIIRS can be combined using the same blending principles of section 3 to produce an information-rich characterization of a complex scene. Figure 6a shows infrared brightness temperatures, with an enhanced hot spot noted at the location of the Pavlof volcano caldera. Figure 6b shows a multidimensional blend where the volcanic ash plume is shown in red, low clouds and snow-capped peaks along the Aleutian Chain are shown in yellow, and high/cold clouds in blue. Clear-sky surfaces (land and water) appear black in this enhancement.

Fig. 6. Example of multidimensional blending applied to Suomi NPP imagery of the eruption of Pavlof volcano at 1324 UTC 28 Mar 2016. (a) VIIRS band M15 (10.763 μm) brightness temperature with M12 (3.7 μm) overlay, showing a small hot spot at the location of the volcano caldera. (b) A blended composite of three layers, with components as discussed in the text, highlighting a volcanic ash plume in red.

Construction of Fig. 6 follows the logic of Eq. (3), invoking three information layers, stacked top-down, as follows: (i) the 12.01–10.76 μm “reverse split window” infrared brightness temperature difference for silicate-based volcanic ash detection (e.g., Prata 1989), with normalization between [0 K, 2.5 K] per Eq. (1), (ii) the VIIRS 10.763 μm clean IR window band for enhancing cold cloud tops, normalized over the interval [210 K, 280 K] [and reversed, as in Eq. (5)], and finally, (iii) the VIIRS DNB lunar reflectance, normalized over the interval [15%, 125%]. The lunar reflectance was computed from the DNB measurements of radiance using the lunar irradiance model of Miller and Turner (2009) convolved with the sensor response function of the DNB. The waning gibbous moon provided significant illumination on this night, enabling the DNB to provide significantly more detail of the low cloud and surface features than is possible from IR bands (e.g., Miller et al. 2013).
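Translating that recipe into the blending framework is straightforward. The following is an illustrative sketch; the function names, color assignments, and the specific stacking of opacities are choices made here to mimic Fig. 6b and are not a statement of the exact production code:

```python
import numpy as np

def normalize(field, vmin, vmax, reverse=False):
    """Min/max scaling to [0, 1] per Eq. (1); reverse=True flips the ramp as
    in Eq. (5), e.g., so that colder cloud tops map toward 1."""
    n = np.clip((field - vmin) / (vmax - vmin), 0.0, 1.0)
    return 1.0 - n if reverse else n

def pavlof_blend(btd_reverse_split_window, bt_window, dnb_lunar_refl_pct):
    """Illustrative three-layer stack behind Fig. 6b: ash (red) over cold
    cloud tops (blue) over DNB lunar-reflectance detail (yellow). Inputs are
    co-registered 2D arrays; clear-sky land and water remain dark."""
    ash = normalize(btd_reverse_split_window, 0.0, 2.5)        # layer (i)
    cold = normalize(bt_window, 210.0, 280.0, reverse=True)    # layer (ii)
    dnb = normalize(dnb_lunar_refl_pct, 15.0, 125.0)           # layer (iii)

    zeros = np.zeros_like(ash)
    ash_rgb = np.stack([ash, zeros, zeros], axis=-1)           # red
    cold_rgb = np.stack([zeros, zeros, cold], axis=-1)         # blue
    dnb_rgb = np.stack([dnb, dnb, zeros], axis=-1)             # yellow

    # Vertical stack: each layer's normalized value serves as its opacity,
    # so lower layers show through only where upper layers carry no signal
    lower = cold[..., None] * cold_rgb + (1.0 - cold[..., None]) * dnb_rgb
    return ash[..., None] * ash_rgb + (1.0 - ash[..., None]) * lower
```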

6. Discussion and conclusions

Multidimensional blending provides a scalable solution for displaying many pieces of information simultaneously, but in a controlled way. In the most general sense, information layers can originate from any form of digital data: observed (measured or derived physical properties), modeled, or prescribed. The engineering of the end product, in terms of the content of the component layers, the rules governing their respective dynamic transparency factors, and the order of their overlap, is controlled entirely by the developer and optimized for the intended effect. It is advised that the design phase engage the target audience (end users) to ensure that the salient information is communicated in the most impactful way.

In an era of increasingly voluminous observational data, including hyperspectral imagery that the next generation of geostationary satellites may provide at high temporal resolution, tools such as multidimensional blending take on greater relevance. Environmental data that have been preprocessed to isolate specific elements of a complex scene can subsequently be combined to form new imagery that communicates multiple pieces of distilled information simultaneously. This synthesis is useful in time-critical operational situations, where forecasters and decision-makers do not have the time to navigate or mine a large collection of data in arriving at actionable information.

The technique as applied to digital satellite data provides a simple mechanism for transitioning seamlessly between multiple sources of information in both the vertical and horizontal dimensions. User-defined scaling factors provide flexibility in the relative strength of transparency in both dimensions (i.e., providing control over the amount of information retained or lost during the blending operation), as sketched below. This control enables developers to improve the presentation quality of satellite products for decision support and as briefing tools.
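For example, one simple way such a user-defined scaling factor could enter the blend is as a gain applied to a layer's dynamic transparency before the weighted sum; a brief, hypothetical sketch:

```python
import numpy as np

def scaled_blend(upper_rgb, lower_rgb, layer_norm, s=1.0):
    """Sketch: gain s > 1 makes the upper layer more opaque (retains more of
    its information); s < 1 makes it more transparent, letting the lower
    layer dominate."""
    alpha = np.clip(s * layer_norm, 0.0, 1.0)[..., None]
    return alpha * upper_rgb + (1.0 - alpha) * lower_rgb
```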

This technique is not without its challenges. The layering process can in principle be continued ad infinitum, with arbitrary numbers of nested vertical stacks [Eq. (4)], horizontal blends of such stacks [Eq. (5)], and even restacking/reblending of multiple instantiations of Eq. (5). However, there can come a point in these displays where the diversity of overlapping information can overwhelm the viewer, and defeat the purpose of the technique. Here, the designer must strike a balance between content and quality, which can itself be thought of as a kind of blend—one between art and science.

Another challenge that confronts higher-order RGB applications such as GeoColor, specifically in terms of operational implementation at the NWS, is the need for either native processing capacity or sufficient bandwidth at nationally distributed Weather Forecast Offices (WFOs). The AWIPS infrastructure used by the NWS is able to handle simple RGB processing (e.g., traditional approaches where linear operations on spectral bands are loaded into each color component) but is ill equipped to implement multidimensional blending, leverage ancillary datasets, or conduct the preprocessing required to customize information layers and take full advantage of the power of higher-order techniques. Considerable time is required to introduce such code updates, and external processing (on site at the WFOs) may be the more tractable short-term solution.

Socializing the concept of multidimensional blending with forecasters, acting either as end users or as developers, is a training challenge. Whereas products such as GeoColor are predeveloped and not intended for postprocess manipulation (although Fig. A2 shows one such example where value can be added), the potential exists for the design of many other multiparameter blended imagery products. If the aforementioned challenges related to external processing can be overcome, a graphical user interface toolkit for the construction of information layers and their layering, based on the constructs of section 4, is entirely possible. A higher-order version of this interface could include prepackaged ancillary datasets (e.g., surface elevation, land–sea mask, sun/sensor geometry), access to logical (if–then–else) constructs, and a "save/implement" capability to apply the customized processing to the native operational data stream, providing a developer's interface for advanced RGB rendering. With proper training, conducting such development on the operational framework would provide a fast track to maximizing the potential of multidimensional blending while circumventing the inertia of the operational transition process.

Multidimensional blending provides a level of flexibility that is not accessible to conventional RGB composites. The technique is applicable not only to satellite imagery, but to any form of imagery. Furthermore, the concept may be applied to quantitative data (e.g., retrievals of a given environmental parameter from different sensors or different algorithms, where the transparency factor may be indexed to retrieval uncertainty). When the information layers are based on scaled versions of quantitative data (e.g., confidence factors, geophysical parameters, etc., derived from physical retrievals), the blended imagery can be displayed via a graphical interface capable of interrogating and analyzing the components. Doing so begins to blur the traditionally understood lines between qualitative imagery and quantitative derived products. To the trained human analyst, capable of drawing context from value-added imagery, combining the "best of both worlds" would provide a powerful new paradigm for working with the new generation of information-rich satellites.

Acknowledgments

Support from the NOAA GOES-R and JPSS Program Offices, the Naval Research Laboratory (Contract N00173-14-G902), and the Office of Naval Research (Contract N00014-16-1-2040) is gratefully acknowledged. The authors claim no financial conflicts of interest. The views, opinions, and findings contained in this article are those of the authors and should not be construed as an official National Oceanic and Atmospheric Administration (NOAA) or U.S. government position, policy, or decision. Satellite data used in this work are freely available from NOAA’s Comprehensive Large Array-Data Stewardship System (CLASS; www.class.noaa.gov), and GOES-R GeoColor products are available in near–real time from NOAA’s GOES Image Viewer (www.star.nesdis.noaa.gov/GOES/conus.php).

APPENDIX

Example Applications of GeoColor

Control over the appearance of multivariate satellite imagery is not limited to the blending of independent layers of information—the individual layers themselves may be preprocessed. The following are examples of such preprocessing to imagery layers that results in advanced display capabilities tailored to specific applications, further demonstrating the versatility of high information content imagery rendering.

a. Capturing the “Great American Eclipse” of 2017

On 21 August 2017, a total solar eclipse crossed the contiguous United States (dubbed the "Great American Eclipse"), the first coast-to-coast traverse in nearly a century (since 8 June 1918). Totality began at 1648 UTC, with greatest eclipse (a duration of 2 min 40 s) at 1826 UTC, and ended at 2001 UTC. The path of totality entered over Oregon, tracked southeast through Nebraska, attained maximum in western Kentucky, and exited over coastal South Carolina. The historic event offered an opportunity to showcase the power of the relatively new preoperational GOES-16 ABI in terms of its high spatial, temporal, and spectral resolution capabilities.

Special preprocessing of GeoColor V2.0 was required for the eclipse, due to the departure from standard assumptions of sunlight strength in the Rayleigh atmosphere correction. Space- and time-resolved (~4 km, 10 s) information on solar obscuration fraction for the eclipse (provided as insolation fraction, 0%–100%) was obtained from the NASA Science Visualization Studio (SVS). Data matched to the ABI scan times were remapped to the ABI native geolocation. The obscuration fractions were then used to suppress the standard (full solar insolation) values. Without this adjustment, the apparent extent of shadow would be too large, and a reddish color (blue light overcorrection) would appear in the regions of partial eclipse.
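The exact adjustment lives inside the Rayleigh correction step, but conceptually it amounts to replacing the assumed full-sun input with one scaled by the SVS insolation fraction. A minimal, hypothetical sketch of that idea (the function, argument names, and the reflectance form used here are assumptions, not the operational code):

```python
import numpy as np

def eclipse_adjusted_reflectance(radiance, solar_irradiance,
                                 insolation_fraction, cos_solar_zenith):
    """Sketch: convert observed radiance to reflectance using a solar input
    scaled by the eclipse insolation fraction (0-1, remapped to ABI pixels
    and matched to the scan time). Without this scaling, partially eclipsed
    pixels appear too dark and the Rayleigh (blue light) correction
    overcorrects, producing the reddish cast noted above."""
    effective = solar_irradiance * np.clip(insolation_fraction, 0.01, 1.0)
    return np.pi * radiance / (effective * cos_solar_zenith)
```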

Figure A1 shows a sequence of the Great American Eclipse as observed by GOES-16 GeoColor V2.0. The moon's shadow can be seen traversing from the Pacific Northwest across the southeast United States, and offshore through the eastern Caribbean Sea. Comparison of Figs. A1b and A1d reveals the impact that suppression of solar heating of the surface has on the fair-weather cumulus field across much of the Southeast, as the region transitioned into and out of an eclipse-induced nocturnal environment between the first and last contacts of the penumbral shadow over the total eclipse cycle. GOES-16 GeoColor animations of the eclipse sequence at the scales of full disk, CONUS, and zoomed in to the southeast United States (to illustrate the fair-weather cumulus cloud suppression) are provided in the online supplemental material.

Fig. A1. GeoColor V2.0 imagery of the "Great American Eclipse" of 21 Aug 2017 as viewed by GOES-16 ABI, showing progression of the moon's shadow across the continental United States for selected times of (a) 1627, (b) 1727, (c) 1827, and (d) 1927 UTC. Near the time of greatest eclipse in (c), much of the southeast United States is under the moon's shadow.

b. Higher spatial resolution via variance encoding

The native resolution of the GOES-R series ABI varies with spectral band. For the visible to near-infrared bands used in the dayside (SHAC) imagery, only the 0.64 μm (red) band is provided at 0.5 km resolution, while the other bands are provided at 1.0 km resolution. When combining the ABI bands, the red band is sampled every other pixel to reduce its resolution to match the other bands, providing a 1 km resolution SHAC image. However, these spatial resolution figures are nadir based; the actual pixel footprint across the ABI field of regard grows with sensor zenith angle (more oblique views) due to the projection of the detector's instantaneous geometric field of view. Given the subsatellite locations of GOES-E at 75.2°W and GOES-W at 137.2°W, the pixel sizes over the central CONUS are approximately twice the nadir values, or effectively 2 km resolution.

To improve the spatial resolution, we applied a sharpening technique originally developed for MODIS imagery (Gumley et al. 2010). The underpinning assumption of this preprocessing is that while the absolute values of spectral reflectance differ among the spectral bands (thus providing color variation), the relative brightness changes in these bands are more tightly correlated. This assumption allows the brightness variation in one band to be applied to the other bands. In this case, the higher-resolution red band (0.5 km native resolution) is used to determine the brightness variation between 1.0 and 0.5 km nested pixels, and that variation is applied to the other 1.0 km native resolution bands. We refer to this technique as "variance encoding," and its result is spatial resolution sharpening of the ABI imagery.

To conduct variance encoding on GOES-R ABI, the reflectance variation for a 2 × 2 pixel spatial domain is computed by taking the mean red band reflectance of the four pixels and then computing the ratios between the original 0.5 km pixel reflectance values and this mean. This computation forms a 2 × 2 array of reflectance variances, where pixels falling below the mean have values <1.0 and pixels above the mean have values >1.0. The entire 0.5 km red band image is processed in this same way. A more sophisticated mean, based on a 3 × 3 array with a nonuniform weighted average more representative of the ABI detector footprint, was also evaluated, but the simpler 2 × 2 standard mean was found to perform sufficiently well for the current application.

After the red band variances are computed, they are used to sharpen each of the coarser resolution bands. For a native 1.0 km resolution image, a new “sharpened” array of twice the row and column dimension is defined (i.e., matching that of the 0.5 km red band). For each 1.0 km native resolution pixel reflectance value, the corresponding red band 2 × 2 array of reflectance variances is extracted and multiplied against it, and the resultant modulated reflectance values are stored in the sharpened array. Thus, the native 1.0 km resolution pixels are treated as the means of the 2 × 2 sharpened array subelements.
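A compact numpy sketch of this procedure (illustrative names; it assumes the 0.5 km red band has exactly twice the row and column dimensions of the 1.0 km band and is already co-registered):

```python
import numpy as np

def variance_encode(red_05km, band_1km):
    """Sketch of variance encoding: modulate a 1.0 km band by the red band's
    sub-pixel brightness variation to produce a 0.5 km sharpened band."""
    h, w = band_1km.shape

    # Mean red reflectance over each 2 x 2 block of 0.5 km pixels
    block_mean = red_05km.reshape(h, 2, w, 2).mean(axis=(1, 3))

    # Ratio of each 0.5 km red pixel to its block mean
    # (<1.0 below the mean, >1.0 above it)
    block_mean_05 = np.repeat(np.repeat(block_mean, 2, axis=0), 2, axis=1)
    variance = red_05km / np.maximum(block_mean_05, 1e-6)

    # Treat each 1.0 km pixel as the mean of its four 0.5 km subpixels and
    # apply the red-band variance pattern to it
    band_05km = np.repeat(np.repeat(band_1km, 2, axis=0), 2, axis=1)
    return band_05km * variance
```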

The entire native resolution 1.0 km image is processed in this way for each band, thus fully populating the 0.5 km sharpened array. Once all bands have been sharpened to 0.5 km in this fashion, they are combined with the native 0.5 km red band as an RGB enhancement, per section 5a(1), to form the final 0.5 km version of the SHAC imagery. The spatial sharpening is conducted as a first step in the processing, prior to the atmospheric correction and the subsequent synthetic/hybrid green computations, ensuring that any errors associated with those steps are not compounded by the variance encoding. The nighttime imagery of 0.5 km GeoColor V2.0 remains nominally at 2.0 km (i.e., oversampled), as all of the IR bands are native 2.0 km and do not offer an opportunity for variance encoding. Additional spatial resolution is realized in the nighttime city lights layer, however, since the native resolution of that database is 15 arc s (~464 m, based on Earth's mean radius).

An example of the spatial sharpening, applied to GOES-16 imagery of the central coast of California, is shown in Fig. A2. Close inspection of the two images reveals enhanced detail throughout the image, but particularly notable over the farmlands of the San Joaquin Valley in the center of the image, and the demarcation of the tree line of the Sierra Nevada on the right side of the image. These added details come from the red band, whose variation is encoded to the blue and near-infrared bands used in concert to produce the hybrid green band [section 5a(1)]. An animation of the 0.5 km resolution imagery for GOES-17 coverage of Hawaii on 15 January 2019 is provided in the online supplemental material.

Fig. A2. Example of spatial resolution sharpening of imagery via variance encoding. GOES-16 GeoColor V2.0 imagery of the San Joaquin Valley of central California (1805 UTC 13 Oct 2017), contrasting (a) standard 1 km resolution with (b) 0.5 km spatially sharpened.

c. “Imprinting” feature enhancements upon imagery layers

It was mentioned in section 3 that, in the standard GeoColor application, operations such as transparency factors are applied uniformly to each RGB color component, acting as a "spectrally gray filter" that preserves the color integrity of the imagery layers. However, RGB-dependent operations can in fact modulate an information layer's native color to useful effect when they are applied judiciously. The approach benefits from anchoring the operation to a well-crafted feature identification algorithm, capable of isolating a specific parameter unambiguously.

The DEBRA technique (Miller et al. 2017), mentioned in section 5a(2), has been applied to isolate lofted mineral dust features in satellite imagery atop complex surface backgrounds. The technique leverages conventional infrared spectral differences for dust detection, coupled with a priori information on how those same signals are expected to appear for the land surface background under dust-free conditions (via cloud-cleared background information or a surface spectral emissivity database), as a way of reducing dust false alarms caused primarily by deserts.

The end result of the DEBRA algorithm is a normalized confidence factor for the presence of lofted dust, expressed in a way that is analogous to the normalized information of Eq. (1) in this paper. The confidence factor can be used as a quantitative masking parameter. In addition, such indices can be communicated visually within dynamic blending techniques such as the one developed in the current paper, via a process we refer to as "feature imprinting."

The concept of feature imprinting is illustrated here on GeoColor V2.0. Figure A3 shows GOES-16 imagery collected at 2156 UTC 10 April 2019 during a period when significant dust was being lofted over the southwest United States and parts of the Mojave Desert in association with a midlatitude storm over the center of the country. A commonly used technique for the detection of lofted mineral dust is the spectral difference between narrowband IR brightness temperature (TB) measurements near 12 and 10 μm, which takes advantage of a scattering extinction feature of quartz found in most species of mineral dust (Wald et al. 1998). When defined as [TB(12) − TB(10)], lofted dust takes on small positive values depending on the optical thickness and altitude of the dust layer. For the illustrative purposes of this feature imprinting example, we have normalized this IR difference, N_dust, over the range [0, 4]. The feature is then imprinted upon the combined RGB components of Eq. (13). To impart a yellow tonality to the imprint, we augment the red and green color components of the GeoColor V2.0 image while suppressing the blue color component:
C_{R,G} = C_{R,G} + N_dust,
C_B = C_B - N_dust.
Since the information provided by N_dust is confined to the subset of the scene where high confidence in dust exists, the original components C_{R,G,B} are modulated only in the imprinted portion of the GeoColor image, leaving other areas unaffected. The modified components are truncated to remain within the [0, 255] bounds used to produce the final RGB composite imagery shown in Fig. A3.
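A brief sketch of this imprinting step (the scaling of N_dust to the 8-bit display range and the names used here are illustrative assumptions):

```python
import numpy as np

def imprint_dust(geocolor_rgb, btd_12_minus_10):
    """Sketch: a normalized dust signal augments the red and green components
    and suppresses blue, yellowing only pixels where the split-window
    difference indicates lofted dust.
    geocolor_rgb    : blended GeoColor image, uint8, shape (H, W, 3)
    btd_12_minus_10 : TB(12 um) - TB(10 um) difference in kelvin, (H, W)"""
    # Normalize the dust signal over [0, 4] K per Eq. (1), scale to 8 bits
    n_dust = np.clip(btd_12_minus_10 / 4.0, 0.0, 1.0) * 255.0

    out = geocolor_rgb.astype(np.float32)
    out[..., 0] += n_dust    # red
    out[..., 1] += n_dust    # green
    out[..., 2] -= n_dust    # blue

    # Truncate to the [0, 255] display bounds
    return np.clip(out, 0, 255).astype(np.uint8)
```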
Fig. A3. Example of feature imprinting upon GOES-16 ABI GeoColor V2.0 imagery from 2156 UTC 10 Apr 2019. A lofted dust signal, based on the spectral difference between the 10 and 12 μm bands, is normalized and used to modulate the RGB color components of the GeoColor image in a nonuniform way, imparting a yellow tonality to regions of high dust confidence (see text for details).

In a similar way, the DEBRA dust product is communicated as value-added imagery by imprinting the dust confidence factor upon conventional grayscale visible and infrared imagery, coloring only the targeted areas of the image and thereby isolating the dust features. This concept of feature imprinting offers yet another method of introducing quantitative information to satellite imagery in a way that preserves the meteorological context while highlighting features of interest to the human analyst.

REFERENCES

  • Bah, M. K., M. M. Gunshor, and T. J. Schmit, 2018: Generation of GOES-16 true color imagery without a green band. Earth Space Sci., 5, 549–558, https://doi.org/10.1029/2018EA000379.
  • Bessho, K., and Coauthors, 2016: An introduction to Himawari-8/9—Japan’s new-generation geostationary meteorological satellites. J. Meteor. Soc. Japan, 94, 151–183, https://doi.org/10.2151/jmsj.2016-009.
  • Broomhall, M., L. Majewski, V. Villani, I. Grant, and S. D. Miller, 2019: Correcting Himawari-8 Advanced Himawari Imager data for the production of vivid true-color imagery. J. Atmos. Oceanic Technol., 36, 427–442, https://doi.org/10.1175/JTECH-D-18-0060.1.
  • d’Entremont, R. P., 1986: Low- and midlevel cloud analysis using nighttime multispectral imagery. J. Climate Appl. Meteor., 25, 1853–1869, https://doi.org/10.1175/1520-0450(1986)025<1853:LAMCAU>2.0.CO;2.
  • d’Entremont, R. P., and L. W. Thomason, 1987: Interpreting meteorological satellite images using a color composite technique. Bull. Amer. Meteor. Soc., 68, 762–768, https://doi.org/10.1175/1520-0477(1987)068<0762:IMSIUA>2.0.CO;2.
  • Elvidge, C. D., M. L. Imhoff, K. E. Baugh, V. R. Hobson, I. Nelson, J. Safran, J. B. Dietz, and B. T. Tuttle, 2001: Night-Time Lights of the World: 1994–1995. ISPRS J. Photogramm. Remote Sens., 56, 81–99, https://doi.org/10.1016/S0924-2716(01)00040-5.
  • Elvidge, C. D., K. Baugh, M. Zhizhin, F. C. Hsu, and T. Ghosh, 2017: VIIRS night-time lights. Int. J. Remote Sens., 38, 5860–5879, https://doi.org/10.1080/01431161.2017.1342050.
  • Goodman, S. J., and Coauthors, 2012: The GOES-R Proving Ground: Accelerating user readiness for the next generation geostationary environmental satellite system. Bull. Amer. Meteor. Soc., 93, 1029–1040, https://doi.org/10.1175/BAMS-D-11-00175.1.
  • Grasso, L., D. Lindsey, Y.-J. Noh, C. O’Dell, T.-C. Wu, and F. Kong, 2018: Improvements to cloud-top brightness temperatures computed from the CRTM at 3.9 μm. Mon. Wea. Rev., 146, 3927–3944, https://doi.org/10.1175/MWR-D-17-0342.1.
  • Gumley, L., J. Descloitres, and J. Schmaltz, 2010: Creating reprojected true color MODIS images: A tutorial. University of Wisconsin–Madison Space Science and Engineering Center Rep., 17 pp., http://gis-lab.info/docs/modis_true_color.pdf.
  • Hansen, J. E., and L. D. Travis, 1974: Light scattering in planetary atmospheres. Space Sci. Rev., 16, 527–610, https://doi.org/10.1007/BF00168069.
  • Hillger, D. W., L. D. Grasso, S. D. Miller, R. L. Brummer, and R. T. DeMaria, 2011: Synthetic Advanced Baseline Imager true-color imagery. J. Appl. Remote Sens., 5, 053520, https://doi.org/10.1117/1.3576112.
  • Hillger, D. W., and Coauthors, 2013: First-light imagery from Suomi NPP VIIRS. Bull. Amer. Meteor. Soc., 94, 1019–1029, https://doi.org/10.1175/BAMS-D-12-00097.1.
  • Kreit, E., L. M. Mäthger, R. T. Hanlon, P. B. Dennis, R. R. Naik, E. Forsythe, and J. Heikenfeld, 2013: Biological versus electronic adaptive coloration: How can one inform the other? J. Roy. Soc. Interface, 10, 20120601, https://doi.org/10.1098/RSIF.2012.0601.
  • Kuciauskas, A., J. Solbrig, T. Lee, and J. Hawkins, 2013: Next-generation satellite meteorology technology unveiled. Bull. Amer. Meteor. Soc., 94, 1824–1825, https://doi.org/10.1175/BAMS-D-13-00007.1.
  • Lee, T. F., S. D. Miller, F. J. Turk, C. Schueler, R. Julian, S. Deyo, P. Dills, and S. Wang, 2006: The NPOESS VIIRS day/night visible sensor. Bull. Amer. Meteor. Soc., 87, 191–200, https://doi.org/10.1175/BAMS-87-2-191.
  • Micke, K., 2018: Every pixel of GOES-17 imagery at your fingertips. Bull. Amer. Meteor. Soc., 99, 2217–2219, https://doi.org/10.1175/BAMS-D-17-0272.1.
  • Miller, S. D., and R. E. Turner, 2009: A dynamic lunar spectral irradiance dataset for NPOESS/VIIRS day/night band nighttime environmental applications. IEEE Trans. Geosci. Remote Sens., 47, 2316–2329, https://doi.org/10.1109/TGRS.2009.2012696.
  • Miller, S. D., F. J. Turk, T. F. Lee, J. D. Hawkins, C. S. Velden, C. C. Schmidt, E. M. Prins, and S. H. D. Haddock, 2006a: The origin of sensors: Evolutionary considerations for next-generation environmental satellite systems. 14th Conf. on Satellite Meteorology and Oceanography, Atlanta, GA, Amer. Meteor. Soc., 10.1, https://ams.confex.com/ams/Annual2006/techprogram/paper_104781.htm.
  • Miller, S. D., and Coauthors, 2006b: NexSat: Previewing NPOESS/VIIRS imagery capabilities. Bull. Amer. Meteor. Soc., 87, 433–446, https://doi.org/10.1175/BAMS-87-4-433.
  • Miller, S. D., C. Schmidt, T. Schmit, and D. Hillger, 2012: A case for natural colour imagery from geostationary satellites, and an approximation for the GOES-R ABI. Int. J. Remote Sens., 33, 3999–4028, https://doi.org/10.1080/01431161.2011.637529.
  • Miller, S. D., and Coauthors, 2013: Illuminating the capabilities of the Suomi National Polar-Orbiting Partnership (NPP) Visible Infrared Imaging Radiometer Suite (VIIRS) day/night band. Remote Sens., 5, 6717–6766, https://doi.org/10.3390/rs5126717.
  • Miller, S. D., T. L. Schmit, C. J. Seaman, D. T. Lindsey, M. M. Gunshor, R. A. Kors, Y. Sumida, and D. W. Hillger, 2016: A sight for sore eyes: The return of true color imagery to geostationary satellites. Bull. Amer. Meteor. Soc., 97, 1803–1816, https://doi.org/10.1175/BAMS-D-15-00154.1.
  • Miller, S. D., R. L. Bankert, J. E. Solbrig, J. M. Forsythe, and Y.-J. Noh, 2017: A dynamic enhancement with background reduction algorithm: Overview and application to satellite-based dust storm detection. J. Geophys. Res. Atmos., 122, 12 938–12 959, https://doi.org/10.1002/2017JD027365.
  • Miller, S. D., W. C. Straka III, J. Yue, C. J. Seaman, S. Xu, C. D. Elvidge, L. Hoffman, and S. I. Azeem, 2018: The dark side of Hurricane Matthew—Unique perspectives from the day/night band. Bull. Amer. Meteor. Soc., 99, 2561–2574, https://doi.org/10.1175/BAMS-D-17-0097.1.
  • Murata, H., K. Saitoh, and Y. Sumida, 2018: True color imagery rendering for Himawari-8 with a color reproduction approach based on the CIE XYZ color system. J. Meteor. Soc. Japan, 96B, 211–238, https://doi.org/10.2151/jmsj.2018-049.
  • Park, J. H., J. Y. Bok, O. Han, H. Lim, and D. W. Chung, 2016: Development of radiometric calibration system for GEO-KOMPSAT-2 AMI. Proc. SpaceOps 2016 Conf., Daejeon, Korea, Korea Aerospace Research Institute.
  • Prata, A. J., 1989: Infrared radiative transfer calculations for volcanic ash clouds. Geophys. Res. Lett., 16, 1293–1296, https://doi.org/10.1029/GL016i011p01293.
  • Schmit, T. J., P. Griffith, M. M. Gunshor, J. M. Daniels, S. J. Goodman, and W. J. Lebair, 2017: A closer look at the ABI on the GOES-R series. Bull. Amer. Meteor. Soc., 98, 681–698, https://doi.org/10.1175/BAMS-D-15-00230.1.
  • Schmit, T. J., S. S. Lindstrom, J. J. Gerth, and M. M. Gunshor, 2018: Applications of the 16 spectral bands on the Advanced Baseline Imager (ABI). J. Oper. Meteor., 6, 33–46, https://doi.org/10.15191/nwajom.2018.0604.
  • Setvák, M., K. Bedka, D. T. Lindsey, A. Sokol, Z. Charvát, J. Šťástka, and P. K. Wang, 2013: A-Train observations of deep convective storm tops. Atmos. Res., 123, 229–248, https://doi.org/10.1016/j.atmosres.2012.06.020.
  • Suomi, V. E., and R. J. Parent, 1968: A color view of planet Earth. Bull. Amer. Meteor. Soc., 49, 74–75, https://doi.org/10.1175/1520-0477-49.2.74.
  • Transon, J., R. D’Andrimont, A. Maugnard, and P. Defourny, 2018: Survey of hyperspectral Earth observation applications from space in the Sentinel-2 context. Remote Sens., 10, 157, https://doi.org/10.3390/rs10020157.
  • Turk, F. J., S. D. Miller, and C. Castello, 2010: A dynamic global cloud layer for virtual globes. Int. J. Remote Sens., 31, 1897–1914, https://doi.org/10.1080/01431160902926657.
  • Wald, A. E., Y. J. Kaufman, D. Tanré, and B.-C. Gao, 1998: Daytime and nighttime detection of mineral dust over desert using infrared spectral contrast. J. Geophys. Res., 103, 32 307–32 313, https://doi.org/10.1029/98JD01454.
