A new and innovative cloud analysis technique has been developed that exploits the temporal information content of geostationary satellite imagery. The algorithm is designed to identify new cloud development and moving cloud systems by comparing the relative change in visible bidirectional reflectance and infrared brightness temperature of collocated pixels taken from consecutive image pairs. First, pixels whose temporal changes exceed predicted values for the cloud-free background are identified as cloudy. Then, based on this information, dynamic visible and infrared cloud thresholds are specified for local analysis regions within the image. Algorithm attributes include minimal reliance on supporting databases such as expected clear-scene bidirectional reflectances and land surface skin temperatures, applicability to all current geostationary environmental satellites [the algorithm processes data from the Geostationary Meteorological Satellite (GMS), Meteosat, and Geostationary Operational Environmental Satellites (GOES)], and a reliable tendency not to overanalyze cloud. Since 1 April 2002, temporal differencing has been running in checkout mode, alongside the U.S. Air Force (USAF) Real-Time Nephanalysis (RTNEPH), as the geostationary algorithm segment of the next-generation Cloud Depiction and Forecast System (CDFS) model. Initial experiences with the model to date (through June 2002) are outlined and discussed. By the end of summer 2002, CDFS will replace RTNEPH as the USAF's real-time operational global cloud model, the only one of its kind in the world.
A next-generation real-time global cloud analysis model has been developed that processes global multispectral, multisensor environmental satellite data from both polar-orbiting and geostationary platforms. The model was developed through the Support of Environmental Requirements for Cloud Analysis and Archive (SERCAA) program funded under the Strategic Environmental Research and Development Program of the U.S. Departments of Energy and Defense. After a 3-month checkout period, plans call for the SERCAA model to replace the current U.S. Air Force (USAF) operational cloud analysis model, the Real-Time Nephanalysis (RTNEPH), in the summer of 2002. In the coming years, archived cloud analyses from this model will offer an alternative global cloud record for climate studies. As such, it is important to document and describe the retrieval technique in order to better understand and use its cloud products. The RTNEPH operates on two-channel data, generally obtained from Defense Meteorological Satellite Program (DMSP) satellites (Hamill et al., 1992). In contrast, SERCAA cloud algorithms analyze DMSP data along with five-channel data from the National Oceanic and Atmospheric Administration's (NOAA) Television Infrared Observation Satellite (TIROS), five-channel data from Geostationary Operational Environmental Satellites (GOES), and three-channel data from the Japanese Geostationary Meteorological Satellite (GMS) and European Meteosat geostationary satellites (Isaacs et al., 1994). Table 1 summarizes the satellite data sources used by the SERCAA global cloud model. The fundamental SERCAA cloud analysis (often called nephanalysis) paradigm is to analyze each data source separately using multiple processing algorithms, each unique for a given sensor. As data are received from each satellite, they are immediately analyzed by the appropriate sensor-specific cloud algorithm and the results are stored in a temporary database.
Nephanalysis programs are designed uniquely to exploit the individual strengths of each sensor system. The resulting multisensor cloud analyses are then integrated into a single optimal analysis using a combined rules-based–optimal-interpolation approach that evaluates the relative accuracy and timeliness of each input analysis (Grassotti et al., 1994). Global analysis integration is performed once per hour by sampling the most recently analyzed data from each satellite source.
During integration, the cloud-analysis fields of each satellite source are evaluated for accuracy and timeliness. Accuracy characteristics vary for each data source and for each analysis algorithm. DMSP data have high spatial resolution with a footprint that is constant across the scan (i.e., spatial resolution does not degrade away from the satellite subpoint); however, because the DMSP analysis algorithm is a dual-threshold technique that operates on visible and/or infrared data, it is heavily dependent on accurate a priori characterizations of the cloud-free background. The NOAA TIROS cloud algorithm capitalizes on the multispectral characteristics of the Advanced Very High Resolution Radiometer (AVHRR) to more accurately detect low cloud, fog, and thin cirrus with a minimal reliance on supporting background data. Combined analyses of DMSP and TIROS polar-orbiting data are useful for characterizing stationary or persistent cloud systems (e.g., nighttime fog, marine stratus, stratiform cloud associated with slowly moving fronts). However, their accuracies age quickly when analyzing convective or rapidly progressive storm systems. Geostationary data are well suited for detection and subsequent analysis of such clouds.
This paper describes a new temporal-differencing technique that was developed for SERCAA to exploit the high temporal information content of geostationary satellite data. Temporal differencing is designed to focus primarily on detection of rapidly developing or moving cloud features and, as such, complements the polar-orbiting algorithms described above. The algorithm is applicable to data from any geostationary satellite with at least one visible or one thermal infrared channel and has minimal reliance on supporting land surface databases. Local cloud–no-cloud thresholds are generated dynamically and therefore adapt to observed changes in background and/or climatic conditions, resulting in a robust cloud detection and analysis technique for operations and climate studies.
The geostationary algorithm implements a hybrid, three-step procedure: 1) temporal differencing, 2) dynamic thresholding, and 3) spectral discrimination. Each step employs a different temporal, spatial, or spectral cloud signature and is therefore sensitive to different types of cloud. Individual tests alone are not assumed to identify all clouds; rather, the results of all tests taken together are required to fully classify an image scene. The temporal-differencing step identifies cloud features that change position and/or intensity over short periods of time. The time rates of change of collocated infrared brightness temperatures and visible counts (when available) taken from two sequential geostationary satellite images are used to discriminate these cloud features from the background. It is important to note that the term “background” here refers to either the underlying Earth's surface or other lower cloud decks. Pixel locations that cool (brighten) at a rate greater than the expected diurnal rate for the local cloud-free surface are assumed to have had cloud either form or advect into them since the earlier image time.
Temporal differencing generally identifies only a fraction of the clouds in an image scene, but it does so with high confidence. The dynamic-thresholding step exploits information on the radiative characteristics of those clouds that have been identified through temporal differencing by identifying nearby clouds with “similar” radiative characteristics. The image is segmented into a regular grid and, for each grid cell, local cloud radiative characteristics are retrieved to dynamically define local infrared brightness-temperature (and, when available, visible count) cloud–no-cloud thresholds. Separate cloud thresholds are calculated for each grid cell and all pixels within the cell that have a lower brightness temperature (and/or higher visible count) than the dynamic threshold(s) are classified as cloudy.
After the first two steps it is possible that some stationary clouds, that is, clouds whose spatial and radiometric attributes have not changed over the analysis time period, remain undetected. These clouds are targeted wherever possible using single-channel and multispectral discriminant techniques that exploit bispectral DMSP, GMS, and Meteosat data and multispectral geostationary GOES/TIROS imager data, as outlined in section 5.
3. Algorithm description
The temporal-differencing algorithm operates on pairs of collocated geostationary satellite images taken over the same region of interest from sequential scans, usually 1 h apart. Cloud detection is performed on a pixel-by-pixel basis by comparing satellite-measured changes to the corresponding clear-scene values over the same period. The algorithm is applied simultaneously to both visible and longwave thermal infrared (TIR) data when both are available; at night, the algorithm is a single-channel TIR technique. Databases of visible background and surface temperature fields are maintained by the algorithm to support computations of the time rate of change of the clear-scene values. The visible background database is built from cloud-free satellite observations taken at the same local time over the previous 10–14 days. Diurnal surface temperature variations are estimated using a global land surface energy balance model, described in more detail in section 3.1.2.
3.1 Temporal differencing
Visible and infrared image processing follow the same procedure. Respective changes in the satellite-observed and clear-scene background parameters are calculated and then compared to determine if they deviate by an amount greater than a preset cloud-detection threshold. For two collocated geostationary satellite images separated by some time period Δt and observed at times t and t − Δt, the change in visible counts for a given pixel is defined as

ΔV = Vt − Vt−Δt,    (1a)
where Vt−Δt is the visible count measured at time t − Δt and Vt is for analysis time t. In SERCAA the time separation is set at Δt = 1 h, but can be reduced (or increased) to whatever temporal resolution the available geostationary satellite data will support. Experience has shown, however, that cloud-detection accuracies degrade noticeably if the time period is either too short (say, less than 30 min) or too long (greater than a few hours) because the temporal changes in cloud cover characteristics are too minimal or too extensive to track locally. The corresponding clear-scene visible count change is

ΔBV = BV,t − BV,t−Δt,    (1b)
where BV is the clear-scene background visible count and where all other variables are as for Equation (1a).
Next, a determination is made as to whether ΔV is representative of “new” cloud. Visible cloud detection is performed by comparing the change in satellite-observed visible counts to the expected change in background. The ΔV pixel is cloudy if

ΔV − ΔBV > ΔVis,    (1c)
where ΔVis is some positive-definite threshold ΔVis > 0. Equation (1c) tests for a satellite-observed scene brightness that increases faster than the background by an amount in excess of the cloud threshold. This relation is applied independently of the time of day and the sign of ΔBV between times t − Δt and t (e.g., bidirectional effects notwithstanding, ΔBV is positive in the morning, ΔBV = 0 near midday, and ΔBV is negative in the afternoon). Visible-channel cloud detection is not performed over highly reflective backgrounds such as snow, ice, desert, and sunglint.
Similarly, the TIR brightness temperature change is compared to the coterminous expected surface skin temperature change. The change in brightness temperature for a given pixel is defined as

ΔT = Tt − Tt−Δt,    (2a)
where Tt−Δt is the TIR brightness temperature measured at time t − Δt and Tt at analysis time t. The corresponding clear-scene surface temperature change is

ΔBT = BT,t − BT,t−Δt,    (2b)
where BT denotes background skin temperature and where all other variables are as for Equation (2a).
Next, a determination is made as to whether ΔT [Equation (2a)] is representative of new cloud. The difference between the satellite-observed and coterminous cloud-free background temperature changes is computed and then compared to a threshold. The ΔT pixel is cloudy if

ΔBT − ΔT > ΔIR,    (2c)
for some positive threshold ΔIR > 0. Analogous to Equation (1c), this relation is applied irrespective of the time of day (e.g., ΔBT positive, morning heating; ΔBT = 0, e.g., early afternoon; ΔBT negative, nighttime cooling).
Daytime pixels are flagged as cloudy only when both the visible and infrared tests [Equations (1) and (2), respectively] detect cloud. At night, data are subjected to the infrared tests [Equation (2)] only. Generally, the visible temporal-differencing test is not used without the infrared test. However, in cases where TIR data are not available (e.g., failure of the TIR sensor), Equation (1) can be used in stand-alone mode and, similarly, so can Equation (2). Additionally, visible data at solar zenith angles greater than a tunable limit x (i.e., at solar elevations less than 90 − x degrees) are excluded from analysis in order to avoid the long shadows, enhanced multiple scattering through a longer sun-to-viewed-point atmospheric path, and lower sensor signal-to-noise ratio (SNR) prevalent in terminator geometries. Currently, the solar zenith angle θsun must be less than 85°.
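The per-pixel classification logic described above can be sketched as follows. This is a minimal illustration rather than the operational CDFS code: the array interface, function name, and default threshold values are assumptions; the paper specifies only that the visible and infrared thresholds are positive tunables and that θsun must be below 85° for the visible test.

```python
import numpy as np

def temporal_difference_cloud_mask(V_t, V_prev, B_V_t, B_V_prev,
                                   T_t, T_prev, B_T_t, B_T_prev,
                                   delta_vis=10.0, delta_ir=2.0,
                                   sza_deg=None, sza_limit=85.0):
    """Sketch of the temporal-differencing tests (Eqs. 1 and 2).

    Pass V_t = None for nighttime (single-channel TIR) operation.
    Threshold defaults are illustrative placeholders.
    """
    # Eq. (2): pixel cools faster than the modeled clear-scene surface.
    dT = T_t - T_prev                   # Eq. (2a)
    dBT = B_T_t - B_T_prev              # Eq. (2b)
    ir_cloudy = (dBT - dT) > delta_ir   # Eq. (2c)

    if V_t is None:
        # At night the algorithm is a single-channel TIR technique.
        return ir_cloudy

    # Eq. (1): pixel brightens faster than the clear-scene background.
    dV = V_t - V_prev                    # Eq. (1a)
    dBV = B_V_t - B_V_prev               # Eq. (1b)
    vis_cloudy = (dV - dBV) > delta_vis  # Eq. (1c)

    # Exclude terminator geometries from the visible test.
    if sza_deg is not None:
        vis_cloudy &= sza_deg < sza_limit

    # Daytime pixels are cloudy only when both tests agree.
    return ir_cloudy & vis_cloudy
```

A pixel that cools 10 K over the hour while the modeled surface stays constant passes the infrared test; a pixel that cools only 1 K does not.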
To illustrate the temporal-differencing algorithm, refer to the example data arrays in Table 2. The top row of the table contains a 5 × 5 array of hypothetical TIR brightness temperatures, T, observed at the beginning and end of some time period, Δt, and their change, ΔT, over that time period. In the bottom row are the corresponding modeled surface temperatures, BT, and their change, ΔBT, over the same time period. Table 3 contains the Equation (2c) difference array ΔBT − ΔT corresponding to the values listed in Table 2. If a TIR cloud threshold ΔIR of 2 K is assumed, then the boldface numbers in Table 3 represent those pixels classified as cloudy by the TIR temporal-differencing algorithm; they satisfy Equation (2c).
3.1.1 Clear-scene visible counts
Visible temporal differencing requires a dynamic database of clear-scene visible counts maintained as a function of satellite and time of day [see Equation (1b)]. In the SERCAA application, one visible background field is maintained per daylight hour for each of the geostationary satellites in use currently: GMS, GOES, and Meteosat. Visible background brightness fields are required to predict the temporal changes that the satellite visible sensor would measure for clear-scene conditions. As such, the background fields are not required to provide an accurate estimate of clear-scene counts in an absolute sense but rather the relative changes as a function of time of day. Clear-scene visible reference background fields are generated directly from satellite measurements as a by-product of previous cloud analyses.
Changes in the satellite-observed visible background are caused predominantly by variations in bidirectional reflectance properties of the Earth's surface, and are due to changes in solar illumination geometry as a function of time of day (the view geometry is constant for geostationary satellites). Visible background data files are generated for each daylight hour by maintaining a record of the lowest visible count at each pixel location in collocated geostationary images accumulated over the most recent 10–14-day period. This procedure is generally sufficient to ensure that no cloudy pixels contaminate the clear-scene database. Daily updates are required to capture seasonal changes in surface reflectance characteristics caused by changing sunlight, temperature, moisture, and vegetation-canopy conditions.
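The running-minimum compositing described above can be sketched as follows. The interface and function name are hypothetical; one such composite is maintained per satellite and per daylight hour, with one image contributed per day over the most recent 10–14 days.

```python
from collections import deque
import numpy as np

def make_background_updater(max_days=14):
    """Maintain a clear-scene visible background for one local hour as
    the per-pixel minimum over the most recent max_days images.

    Clouds raise a pixel's visible count, so the running minimum over
    a 10-14-day window tends toward the cloud-free background, and the
    rolling window lets the composite track seasonal surface changes.
    """
    history = deque(maxlen=max_days)  # one image per day at this hour

    def update(image):
        history.append(np.asarray(image, dtype=float))
        # Per-pixel minimum over the retained window.
        return np.min(np.stack(history), axis=0)

    return update
```

Calling the updater once per day returns the current clear-scene composite; once the window is full, the oldest day drops out automatically.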
Movie 1 contains a sequence of visible background fields calculated for the GOES-8 (also known as GOES-East) satellite for three different local times sampled throughout the day. Note the difference in background reflectance properties over the land (due to varying solar illumination geometry) and over water (due to sunglint). Since the visible background fields are cloud free, changes in bidirectional reflectance due to the daily cycle of varying view and illumination geometries (such as sunglint and vegetation “hot spots”) are automatically built into the cloud detection logic, as prescribed by Equation (1b).
3.1.2 Surface skin temperatures
Recall from Equation (2) that TIR brightness-temperature temporal differences are compared to modeled changes in clear-scene surface temperature for the same location. Surface temperature information is obtained from a one-dimensional, global land surface energy balance model known as SFCTMP, which is run operationally at the Air Force Weather Agency (AFWA; Kopp et al., 1994). SFCTMP routinely produces global analyses of skin and shelter temperatures, plus 3- and 4.5-h forecasts, once every 3 h. The SERCAA temporal-differencing algorithm uses the skin-temperature product since it is a parameter whose time rate of change closely approximates that of TIR satellite observations.
Similar to the visible background data, the SERCAA requirement on skin temperature analysis and forecast products is to capture expected clear-column brightness temperature changes throughout the day (as opposed to absolute accuracy for any given time). Movie 2 contains a sample sequence of four skin-temperature fields mapped to the same projection as in Movie 1.
Diurnal physical processes modeled by SFCTMP that cause differences between skin and shelter temperatures include solar geometry effects such as the length of time and angle of exposure of a surface to the sun. Factors not explicitly modeled are surface emissivity, atmospheric transmission, and sensor calibration, all of which can affect satellite-measured TIR brightness temperatures.
However, since SFCTMP forecast products are being used in a relative, time-differencing sense over short timescales, the modeled change in skin temperature is a good approximation to corresponding clear-scene brightness-temperature differences. This is a highly desirable attribute since the need for accurate specification of TIR atmospheric attenuation effects is obviated, especially since such effects are difficult to diagnose or model accurately on global scales and in real time. This attribute also uniquely distinguishes the temporal-differencing algorithm from single-channel TIR threshold detection techniques such as the RTNEPH (Hamill et al., 1992) and International Satellite Cloud Climatology Project (ISCCP; Schiffer and Rossow, 1983) cloud analysis models. The cloud-detection accuracy of these models is directly dependent on the absolute accuracy of the background temperature specification.
3.2 Dynamic thresholding
To this point in the processing sequence, all pixels in an analysis region that have passed the temporal-differencing tests are considered cloudy. However, this is generally a small percentage of all the cloud-filled pixels. The next algorithm step is to expand the cloud-detection analysis to nearby unclassified pixels.
Local analysis regions are defined for a rectangular nested grid that is overlaid on the satellite image. Typically analysis boxes are 64 × 64 or 128 × 128 GOES infrared pixels on a side, a number determined both by typical cloud characteristic length scales and the computational requirements of the real-time model. Local-scale visible and TIR cloud thresholds are dynamically defined for these boxes using the radiative attributes of the temporal-difference cloudy pixels within each box.
Pixels classified as cloudy by the temporal-differencing test(s) are used to generate separate sets of visible counts and TIR brightness temperatures for each analysis box. The minimum and maximum values of each set are then used to compute separate TIR brightness-temperature (Tcld) and visible-count (Vcld) cloud–no-cloud thresholds using

Tcld = Tmax − γ(Tmax − Tmin),    (3)

Vcld = Vmin + δ(Vmax − Vmin),    (4)
where the constants γ and δ are tunable parameters and where Tmax, Tmin, Vmax, and Vmin are the maximum and minimum values from the TIR and visible sets, respectively. A pixel is classified as cloudy if V > Vcld and/or T < Tcld.
An independent set of TIR and visible cloud–no-cloud thresholds (Tcld and Vcld, respectively) is calculated for each analysis box. The dynamic cloud thresholds are computed locally, adjusting automatically to the local-scale cloud, satellite view, and solar illumination conditions. Pixels are thus locally classified as cloudy if either the TIR brightness temperature is less than Tcld or the visible count is greater than Vcld.
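The per-box threshold computation reduces to a few lines. The sketch below assumes the threshold forms implied by the text (γ = 0 placing Tcld at the warm extreme of the seed set, δ = 0 at its dark extreme); the function name and interface are hypothetical.

```python
import numpy as np

def dynamic_thresholds(T_seed, V_seed=None, gamma=0.1, delta=0.1):
    """Compute local cloud/no-cloud thresholds for one analysis box.

    T_seed and V_seed hold the TIR brightness temperatures and visible
    counts of pixels already flagged cloudy by temporal differencing
    within the box. A pixel is then cloudy if T < T_cld or V > V_cld.
    """
    T_seed = np.asarray(T_seed, dtype=float)
    # Warm edge of the detected-cloud set, relaxed by gamma to absorb
    # temporal-differencing detection uncertainty.
    T_cld = T_seed.max() - gamma * (T_seed.max() - T_seed.min())
    if V_seed is None:
        return T_cld, None
    V_seed = np.asarray(V_seed, dtype=float)
    # Dark edge of the detected-cloud set, relaxed by delta.
    V_cld = V_seed.min() + delta * (V_seed.max() - V_seed.min())
    return T_cld, V_cld
```

With the worked example's seed set (Tmin = 216 K, Tmax = 251 K) and γ = 0.3, the TIR threshold comes out at 240.5 K.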
Using the example in Table 3, TIR brightness temperatures for all pixels that have been classified as cloudy by the temporal-differencing algorithm are highlighted in boldface. Using a grid size of 5 × 5, the TIR temporal-differencing set consists of those boldface brightness temperatures,
with minimum and maximum values of Tmin = 216 K and Tmax = 251 K, respectively. Using Equation (3) with γ = 0.3, the TIR cloud threshold is

Tcld = 251 − 0.3 × (251 − 216) = 240.5 K.
All remaining unclassified pixels in the 5 × 5 grid cell with TIR brightness temperatures lower than this value are classified as cloudy. In Table 3, these pixels are set in italics. Remaining pixels that are neither boldface nor italic remain unclassified at this stage of processing. It is also worth emphasizing that Tcld is not a “hardwired” threshold but rather one whose method of determination adapts automatically to local cloud conditions.
During initial development tests using GMS, Meteosat, and GOES satellites, the dynamic-threshold parameters were set to γ = 0.1 and δ = 0.1. However, these parameters remain regionally tunable as a function of time of day, satellite type, and geographic location. Intuitively, the constants γ and δ are metrics of the temporal-differencing cloud detection accuracy. The accuracy of the dynamic-threshold magnitudes depends only on the assumption that all data values that are used to generate the thresholds are from cloudy pixels. If temporal differencing were perfectly accurate, γ would be set to zero. As accuracy decreases, the magnitude of γ increases.
3.3 Static spectral tests
Following the temporal differencing and dynamic-threshold processing, it is still possible that clouds with spatial and spectral attributes that have not changed over the analysis time interval remain undetected. The final step in the geostationary algorithm processing sequence applies separate visible and TIR threshold algorithms to the remaining unclassified pixels. In a temporal-differencing sense, these cloud spectral tests are considered “static” in that they only analyze data valid at a single time.
To detect “obvious” clouds that are either highly reflective or that have low temperatures, the following cloud threshold tests are applied:

Vt − BV > ΩVis,    (5a)

BT − Tt > ΩIR,    (5b)
where all variables on the left side are as defined for Equation (1) and Equation (2), and where ΩVis and ΩIR are empirically defined visible and TIR cloud thresholds. Equation (5a) and Equation (5b) test for clouds that are significantly brighter and/or colder than the expected clear-scene background.
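A minimal sketch of these static tests follows, assuming illustrative placeholder values for ΩVis and ΩIR (the paper says only that they are conservatively large empirical thresholds); the function name and interface are hypothetical.

```python
import numpy as np

def static_spectral_cloud_mask(V_t, B_V_t, T_t, B_T_t,
                               omega_vis=40.0, omega_ir=20.0):
    """Single-time ("static") threshold tests: a pixel is cloudy if it
    is much brighter (Eq. 5a) or much colder (Eq. 5b) than the expected
    clear-scene background. Defaults are illustrative placeholders."""
    bright = (np.asarray(V_t) - np.asarray(B_V_t)) > omega_vis  # Eq. (5a)
    cold = (np.asarray(B_T_t) - np.asarray(T_t)) > omega_ir     # Eq. (5b)
    return bright | cold
```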
The thresholds ΩVis and ΩIR are conservatively set to large values; in other words, if a classification mistake is made, it is most likely that cloud will be labeled clear. There are three reasons for this. First, the temporal-differencing background databases are not designed to provide sufficient accuracy to support small cloud thresholds ΩVis and ΩIR. This is because the visible and TIR background fields are intended only to predict relative changes in satellite-observed clear-scene counts during a time interval, as opposed to absolute magnitudes for a specific (fixed) time.
Second, in the context of the overall SERCAA analysis procedure, the role of the geostationary algorithm is focused on detecting rapidly changing cloud systems. Recall from section 1 that in addition to the geostationary analysis, multispectral data from polar-orbiting TIROS and U.S. GOES geostationary satellites are also processed and then integrated with temporal-differencing results. The multispectral imager and AVHRR sensors on board GOES and TIROS satellites provide a good capability for identifying static or persistent cloud features, making it less critical that the static bispectral geostationary tests detect them. DMSP satellites offer a similar but somewhat less reliable “bispectral” capability at a relatively high spatial resolution. Although the polar satellite refresh times are longer at low latitudes relative to the geostationary satellites, it is reasonable to persist analyses of “steady” clouds for the periods between polar passes (e.g., marine stratus or nighttime fog) since they change relatively slowly.
Third, large threshold values help ensure that clear pixels are not classified as cloudy by these static tests. The overall design paradigm is (a) if the clouds' radiative attributes are changing in space and/or time, temporal-differencing tests will detect them; and (b) if they are not, then the multispectral or daytime bispectral static algorithms will. The integrated hourly SERCAA analysis then combines the individual satellite products. This cloud-detection strategy is designed to exploit the strengths of each satellite sensor, taking into account its temporal, radiometric, spectral, and spatial resolutions.
3.4 Cloud analysis integration
Three independent gridded cloud analyses are produced in real time: one derived from the DMSP Operational Linescan System (OLS), a second from the TIROS AVHRR, and a third from the global constellation of geostationary sensors. Analyses are then mapped to the model grid, nominally 12 km. Each grid point will likely have a different time key based on the source of the input satellite data. To generate a merged global analysis, cloud data from each of the three analysis grids, as well as from all available surface-based cloud observations, are fed into an analysis integration algorithm that produces a single hourly optimal analysis based on the relative timeliness and accuracy of each data source. Although a full description is outside the scope of this paper, the general conceptual approach to the integration problem utilizes both rules-based concepts and principles from statistical objective analysis (e.g., see Lorenc, 1981; Hamill and Hoffman, 1993).
4. Sample results
Movie 3 contains GOES-8 4-km visible, TIR, and color-composite imagery over the southeastern United States, Mexico, and Central America. These images show low cumuliform clouds in the southern Mississippi River valley region, and widespread cirrus, cirrostratus, and cumulonimbus clouds over Florida, the Gulf of Mexico, and much of Central America. There is also some optically thin cirrus over western Texas and New Mexico that is difficult to detect in the visible image, but it is more apparent in the infrared and color composite. Figure 1 contains the corresponding 2100 UTC visible background field derived from the previous 14 days of visible imagery.
Movie 4 shows results of the temporal-differencing test obtained by applying Equation (1) and Equation (2) to the 2000 and 2100 UTC image pair, with cloudy pixels displayed as red. Note in the northern part of the image that the easternmost cirrostratus and cumulonimbus cloud edges have been flagged by the temporal-differencing test as cloud. In contrast, in the southern part of the image over the Pacific Ocean it is the western edges that have been flagged. This is consistent with the fact that at midlatitudes the winds aloft are predominantly westerly, while closer to the equator they are easterly.
Note too that the thin cirrus clouds over Texas and New Mexico have been detected in the temporal-differencing step, but that only a small fraction of the cumuliform clouds in the Mississippi delta region were flagged. These thin cirrus clouds are moving quickly, causing fast changes in pixel visible brightnesses and TIR temperatures. These quick changes in turn trigger the temporal-differencing technique to readily identify them as cloud. Conversely, the radiative attributes of the more stationary cumuliform clouds have remained relatively unchanged over the analysis period, leaving them undetected thus far in the processing sequence.
Movie 5 shows the results of the second processing step: the visible and TIR dynamic-threshold tests. Pixels with TIR brightness temperatures less than the infrared threshold defined by Equation (3), or with visible counts greater than the visible threshold defined by Equation (4), are colored orange (the red pixels have been carried over from Movie 4). In general the visible dynamic-threshold algorithm detected the brightest, most optically thick clouds while the infrared test identified most of the cold, optically thin cirrus. To this point in the cloud detection processing, all of the temporal information content of the 2000–2100 UTC satellite image pair has been exploited. As can be seen in Movie 5, most of the clouds have been identified, but some still remain undetected.
Pixels that have remained unclassified through the first two processing steps are now subjected to the static spectral tests. Movie 6 shows the results of the visible and TIR single-channel cloud tests [Equation (5)] in yellow. Note that the stationary cumuliform clouds in Mississippi, Louisiana, and throughout Mexico and Central America are detected. The success of these tests is due mainly to the strong contrast between the cloud visible signatures and the underlying clear-scene visible-counts field displayed in Figure 1.
By comparing the colored areas in Movie 6 with the original imagery in Movie 3 it can be seen that the combined algorithm results provide an accurate discrimination between clear and cloudy areas over the range of cloud and cloud-environment conditions within the image scene.
5. GOES imager multispectral cloud tests
All geostationary satellites used during SERCAA development (GOES, Meteosat, and GMS) have imaging sensors with a common visible and thermal infrared channel (refer to Table 1). As such all of the time-differencing cloud tests discussed thus far can be equally applied to data collected by any of them. The GOES-Next series of satellites introduces additional imager channels beyond those on Meteosat and GMS that have the potential to significantly improve stationary low-cloud detection capabilities, especially at night. Most notably, a middle-wavelength infrared (MWIR) channel between 3.8 and 4.0 μm along with split-window TIR channels at 10.2–11.2 and 11.5–12.5 μm will be available (Menzel and Purdom, 1994). Previous GOES satellites used the Visible/Infrared Spin Scan Radiometer (VISSR) Atmospheric Sounder (VAS) in a multispectral imaging mode to provide data at these wavelengths, but on an alternating schedule. GOES-8–10 provide all channels simultaneously on a full-time basis and at improved spatial resolution. The European Meteosat Second Generation (MSG) satellite and the Japanese Multifunctional Transport Satellite (MTSAT) will fly these and additional channels as well.
The SERCAA geostationary algorithm is flexible enough to exploit these channels through the addition of multispectral cloud tests designed to identify spectral signatures of specific cloud classes. This type of approach has been used successfully by a number of meteorologists for classifying cloud using AVHRR multispectral data (e.g., Saunders and Kriebel, 1988; Gustafson and d'Entremont, 1992; Derrien et al., 1993; Ackerman et al., 1998).
- Nighttime low clouds, fog, and stratus. MWIR emissivities for water droplet clouds are lower than TIR emissivities, which in turn result in lower nighttime MWIR brightness temperatures compared to TIR brightness temperatures. Nighttime low clouds and fog are therefore detectable by testing whether the TIR brightness temperature exceeds the MWIR brightness temperature, TTIR − TMWIR, by more than a positive, empirically defined channel-difference threshold.
- Daytime low clouds, fog, and stratus. During daytime, upwelling MWIR radiances from low, water droplet clouds have contributions from both reflected solar and emitted thermal energy, while the TIR radiances are essentially purely thermal (and blackbody) in nature. Consequently, for stratus and fog, daytime MWIR brightness temperatures are higher than their corresponding TIR temperatures. During daytime, low clouds and fog are detected using

TMWIR − TTIR > CB,    (7a)
- Daytime cumulonimbus. Cumulonimbus and optically thick cirrostratus clouds also reflect 3.9-μm solar energy efficiently and hence can test positive for cloud using Equation (7a). In fact, the MWIR–TIR brightness temperature difference is often larger for ice particle clouds, but not unambiguously so. However, cirrostratus and cumulonimbus clouds have temperatures that are much lower than those of the underlying surface, and their visible brightness is very high. Thus a set of tests, Equations (8a)–(8c), is used to discriminate optically thick ice particle clouds. Equation (8a) is the channel-difference test of Equation (7) evaluated with CB, the daytime MWIR–TIR channel-difference threshold (K); CBIR is a high-cloud threshold (K) that ensures the cloud brightness temperature is significantly less than that of the underlying surface; Vt is the visible count measured at image time t; θ is the scene solar zenith angle; and CBVIS is a visible brightness threshold. Equation (8b) checks that the measured cloud is cold, and Equation (8c) ensures that it is very bright. A pixel tests positive for cumulonimbus or cirrostratus whenever Equations (8a), (8b), and (8c) pass simultaneously.
- Nighttime thin cirrus. Satellite-observed radiance from thin cirrus has contributions from both the relatively warm underlying surface and the cold cloud. The Planck function is more strongly dependent on temperature at MWIR wavelengths, resulting in radiances with higher proportions of surface-emitted energy, and therefore higher brightness temperatures, when the field of view has a mixture of ground and (transparent) cirrus radiances. Additionally, cirrus transmissivities are higher at 3.9 μm than at 10.7 μm. Both effects combine to ensure that cirrus MWIR brightness temperatures are higher than the corresponding TIR temperatures. The multispectral thin-cirrus test at night, Equation (9), has the same form as Equation (7a), with CI the nighttime MWIR–TIR thin cirrus threshold (K). Reflected incident solar energy makes it problematic to develop an accurate MWIR–TIR thin cirrus test for daytime illumination conditions.
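The multispectral tests above can be sketched as simple channel-difference predicates on brightness temperatures. The threshold values, the cosine normalization of the visible count in the brightness test, and the function names below are illustrative assumptions for this sketch, not operational SERCAA settings.

```python
# Hedged sketch of the GOES MWIR/TIR multispectral cloud tests described
# above. All threshold values are illustrative placeholders.
import math

def night_low_cloud(t_mwir, t_tir, lc_night=-1.0):
    """Eq. (6) idea: water clouds emit less at MWIR, so T_mwir < T_tir at night."""
    return (t_mwir - t_tir) < lc_night

def day_low_cloud(t_mwir, t_tir, lc_day=4.0):
    """Eq. (7) idea: reflected solar raises daytime MWIR temperatures over fog/stratus."""
    return (t_mwir - t_tir) > lc_day

def day_cumulonimbus(t_mwir, t_tir, t_sfc, vis_count, sza_deg,
                     cb=8.0, cb_ir=20.0, cb_vis=180.0):
    """Eqs. (8a)-(8c) idea: strongly MWIR-reflective, cold, and very bright cloud."""
    channel_diff = (t_mwir - t_tir) > cb                 # (8a): larger CB threshold
    cold = (t_sfc - t_tir) > cb_ir                       # (8b): much colder than surface
    # (8c): visible count normalized by solar zenith angle (assumed form)
    bright = vis_count / math.cos(math.radians(sza_deg)) > cb_vis
    return channel_diff and cold and bright

def night_thin_cirrus(t_mwir, t_tir, ci=2.0):
    """Eq. (9) idea: mixed ground/cirrus FOVs push T_mwir above T_tir at night."""
    return (t_mwir - t_tir) > ci
```

Each predicate returns True when its cloud class is indicated for a pixel; in the operational model the thresholds are tuned locally rather than fixed as here.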
Finally, note that the threshold factors LCx, CBx, and CI in Equations (6)–(9) are tunable in the operational model, each resolvable on local box scales as small as 24 km on a side, and as a function of satellite (e.g., GOES-10, NOAA-16, DMSP F-12), background geography (e.g., land, coast, water), and time of day (in 2-hourly increments, for a total of 12 time intervals per day). This tunability helps especially over deserts, where strong temperature gradients are a natural part of the diurnal heating and cooling cycle.
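The per-satellite, per-geography, per-time-bin tuning just described can be pictured as a keyed lookup table with global defaults. The key layout, the default values, and the omission of the 24-km local-box index are simplifying assumptions made for this sketch.

```python
# Illustrative sketch of a tunable-threshold lookup keyed as the text
# describes: satellite, background geography type, and 2-hour time bin.
# Default values are hypothetical, not operational settings.
DEFAULTS = {"LC_night": -1.0, "LC_day": 4.0, "CB": 8.0, "CI": 2.0}

class ThresholdTable:
    def __init__(self):
        # (satellite, geography, time_bin) -> dict of threshold overrides
        self._table = {}

    def tune(self, satellite, geography, hour_utc, **overrides):
        key = (satellite, geography, hour_utc // 2)  # 12 two-hour bins per day
        self._table.setdefault(key, {}).update(overrides)

    def get(self, name, satellite, geography, hour_utc):
        key = (satellite, geography, hour_utc // 2)
        return self._table.get(key, {}).get(name, DEFAULTS[name])

table = ThresholdTable()
# Relax the cumulonimbus channel-difference threshold over afternoon desert:
table.tune("GOES-10", "desert", 14, CB=12.0)
print(table.get("CB", "GOES-10", "desert", 15))  # tuned bin (14-16 UTC) -> 12.0
print(table.get("CB", "GOES-10", "water", 15))   # untuned key -> default 8.0
```

Untuned keys fall through to the global defaults, so analysts need only override thresholds where local performance is poor.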
6. Algorithm limitations
Since becoming a part of the new Cloud Depiction and Forecast System (CDFS) model at the Air Force Weather Agency (Neu et al., 1994), the geostationary temporal-differencing algorithm has demonstrated skill to its operational users. Nonetheless, experience gained during formal testing of CDFS in the spring of 2002 has helped modelers to better understand its limitations as well. In operations, the temporal-differencing model is run in real time and its pixel-level results are displayed as toggle overlays on top of the original satellite image data. These displays are inspected interactively, also in real time, by trained analysts who can objectively evaluate the accuracy of the cloud mask against the original satellite imagery from which it was generated. In regions where cloud-detection performance is suboptimal, algorithm thresholds [such as those in Equations (1c), (2c), and (3)–(9)] are digitally tuned for local conditions. Adjustments to these thresholds can be performed in real time on regional and localized spatial scales as small as 24 km, and for up to 12 different time periods per day, for each individual satellite. Seasonal variations are handled in real time as well. This process, sustained by constant feedback between users and modelers, is how the quality of the CDFS cloud products is maintained.
There are many reasons why real-time, global cloud models require operational tuning. Spatially and temporally varying atmospheric and Earth surface conditions do not lend themselves to a single, static set of thresholds for the entire globe. Stressing environments include (a) clouds over bright backgrounds such as desert, snow, and ice; (b) low-level clouds in winter and arctic nighttime that have formed in a stable surface inversion layer, thereby appearing warmer than the underlying background; (c) coastlines that can be mistaken for clouds; and (d) pixel growth caused by the Earth's surface curving away from the satellite as the sensor scans poleward.
Experience to date indicates that temporal differencing often confuses clear conditions with “new” clouds in places where large gradients in background brightness and/or skin temperature are features of the natural background itself. Examples include the Gulf Stream's north wall (infrared), the southern boundary of the Sahara Desert (visible), and snow–no-snow boundaries (visible). However, incorrect cloud detection is minimized when the satellite data are accurately geolocated and a high-resolution geography-type (e.g., land, coast, water, desert, snow) background database is available. Knowing the land cover background type accurately enables the algorithm thresholds to be selectively tuned on very fine spatial scales. Furthermore, this particular problem is so far restricted largely to nighttime conditions, when visible data are not available.
As the geostationary sensors scan higher latitudes, at geocentric angles increasingly distant from the satellite subpoint, longer atmospheric pathlengths coupled with larger pixel sizes generally result in a smaller contrast between clouds and the Earth's surface. Under such conditions geostationary temporal differencing tends to underanalyze clouds, especially those with small spatial scales that are not resolvable by the larger pixel fields of view (FOVs). This problem is systematic and has caused modelers to abandon the use of geostationary data whenever a pixel falls more than 50 geocentric degrees from the satellite subpoint. Figure 2 contains a plot of CDFS global geostationary coverage in the Western Hemisphere. Assuming a constant instantaneous field of view, pixel sizes approach five times their nadir size at geocentric angles of 50°. Outside the circles, reliance is placed solely on polar-orbiter data. At most high latitudes, polar coverage is updated as frequently as hourly because the TIROS and DMSP orbital poles lie at 81°N and 81°S.
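The footprint growth with geocentric angle can be approximated with elementary spherical geometry. The sketch below assumes a spherical Earth and a fixed instantaneous field of view, and illustrates only the monotonic along-scan stretch with distance from the subpoint; it is not calibrated to reproduce the operational figure quoted above.

```python
# Rough geometric sketch of pixel growth away from the geostationary
# subpoint. Spherical Earth assumed; distances in km.
import math

R_E = 6378.0       # Earth radius
R_S = 42164.0      # geostationary orbital radius (from Earth's center)
H = R_S - R_E      # nadir (subpoint) altitude

def view_geometry(gamma_deg):
    """Slant range and local view zenith angle at geocentric angle gamma."""
    g = math.radians(gamma_deg)
    # Law of cosines: Earth center -> surface point -> satellite triangle
    d = math.sqrt(R_S**2 + R_E**2 - 2.0 * R_S * R_E * math.cos(g))
    # Law of sines gives the view zenith angle at the surface point
    theta_v = math.asin(min(1.0, R_S * math.sin(g) / d))
    return d, math.degrees(theta_v)

def footprint_growth(gamma_deg):
    """Along-scan footprint length relative to its nadir value."""
    d, theta_v = view_geometry(gamma_deg)
    return (d / H) / math.cos(math.radians(theta_v))

for gamma in (0, 25, 50):
    print(gamma, round(footprint_growth(gamma), 2))
```

The growth factor is 1 at nadir and increases steadily toward the 50° cutoff, consistent with the underanalysis tendency described above.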
Ideally there are several options available for evaluating model accuracies, including 1) comparisons with surface-based observations of cloud and/or 2) comparisons between the digital cloud-detection output and a cloud mask that is manually generated (using interactive computer applications) by a trained analyst, each making use of the same input satellite data. However, there are many reasons why option 1 is insufficient in practice, notwithstanding the question of which cloud report is “correct,” surface-based or model-retrieved.
First, satellites view clouds from vastly different perspectives and illumination geometries than do surface observers. Under multilayer cloud conditions, satellites are more likely to “see” a cirrus shield that obscures the cloud deck below, while at the surface the cirrus is hidden from view by the low clouds. Satellite sensors detect reflected solar energy, while surface observers view both reflected and transmitted energy. Similarly, consider the viewing of boundary layer maritime stratus and cumulus in the Tropics. Surface observers can easily see these clouds. However, if the maritime atmosphere is laden with enough water vapor, these same low-level clouds are effectively blocked from the space-based view of infrared sensors because strong water vapor absorption prevents the clouds' upwelling thermal energy from penetrating the entire atmospheric column.
Second, surface observers make subjective estimates of cloud cover and cloud height, while satellite reports are objective. Third, surface observations of cloud are restricted to near-zenith views, while satellite observations span viewing geometries from nadir to the scan limb. This strongly influences any estimates of cloud fraction. For example, Snow et al. (1985) demonstrate that space-based estimates of cloud cover for the same cloud field can vary by an average of 25% (and by as much as 40%) between nadir and the edge of scan. The extent of these variations is a function of cloud aspect ratio and the true fractional cloud cover.
For all these reasons a truly objective and quantitative global validation of the CDFS model remains elusive. None of this is to say that objective validations with surface-based cloud observations have no value, but rather that such comparisons offer limited insight into model performance when the magnitude of the global cloud-monitoring challenge is considered. Assuming on average that each observer has a view of 625 km2 of “local” sky cover, the approximately 10,000 worldwide METAR (translated roughly from the French as aviation routine weather report) surface stations cover at best only 1% of the Earth's surface, and even then only at synoptic times. The U.S. Air Force has requirements for CDFS cloud products at both the individual-satellite (real-time) and integrated-analysis (hourly) levels. It is clear that surface reports (which some view as “truth”) cannot alone satisfy these requirements. It is also clear that their sparsity in space and time cannot support a truly global validation effort.
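The 1% coverage figure follows from back-of-the-envelope arithmetic:

```python
# Surface-station coverage check: ~10,000 METAR stations, each observing
# roughly 625 km^2 of local sky, against the Earth's total surface area.
import math

EARTH_SURFACE_KM2 = 4.0 * math.pi * 6371.0**2   # ~5.1e8 km^2
covered_km2 = 10_000 * 625                      # 6.25e6 km^2
fraction = covered_km2 / EARTH_SURFACE_KM2
print(f"{fraction:.1%}")                        # prints "1.2%"
```

The observed fraction is on the order of 1%, consistent with the figure quoted above.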
This is why CDFS model caretakers have adopted the real-time interactive “bogusing” paradigm described in the beginning of this section. Although a part of the bogusing process includes comparisons between coterminous surface-based and model-retrieved cloud reports, the overwhelming majority of comparisons are made with manually generated cloud masks over most of the globe. Satellites provide the only source of cloud observations on a truly global scale, and so it stands to reason that these data will be used constantly in the “verification” and maintenance of the CDFS cloud model. Forecaster-generated bogusing operations are tagged with date, time, and geographic location information as well as with the satellite whose data were used to generate the poor automated analysis. As bogusing statistics become available later this year, it will be important to document the strengths and shortcomings of the new cloud model and to remain vigilant in ensuring the quality of its products.
Under a research and development program known as SERCAA, a series of cloud algorithms have been developed that process sensor data from both polar-orbiting and geostationary environmental satellites to produce an integrated, global cloud analysis product in real time. As of 1 April 2002, SERCAA algorithms have been running alongside the current USAF RTNEPH operational cloud analysis program, and they are scheduled to replace RTNEPH during the summer of 2002.
As a part of SERCAA, a temporal-differencing algorithm has been developed that exploits the high refresh rates of geostationary satellites to automatically detect and analyze clouds based on their temporal signature in visible and/or infrared sensor data. The algorithm is designed to complement cloud information obtained from polar-orbiting satellite passes and, as such, is most sensitive to developing or moving cloud systems. Dynamically assigned local thresholds are used to identify persistent or stationary clouds that may be present in the vicinity of moving and/or developing clouds. The cloud detection algorithm adapts automatically to local cloud and cloud environment characteristics with minimal need for user interaction and is applicable to imaging sensor data from any geostationary environmental satellite.
The temporal-differencing approach is different from other cloud analysis algorithms, such as RTNEPH and ISCCP, in that retrieval accuracy is minimally dependent on accurate specification of clear-scene background radiative characteristics. Estimates of background skin temperature and visible brightness are used only in a time-relative sense to predict expected changes in the clear-scene background attributes over short time periods. This obviates the need for background fields that are highly accurate at any single time. A numerical land surface energy balance model is used to analyze and predict surface skin temperature fields based on input from an NWP model and surface reporting stations. Clear-scene visible reference background fields are generated directly from satellite measurements as a by-product of previous cloud analyses. Because these reference fields are based on satellite measurements, diurnal effects and variations in the bidirectional reflectance of the land surface are implicitly accounted for.
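The core temporal-differencing idea summarized above can be sketched as a comparison of observed pixel-level change between consecutive images against the change predicted for the cloud-free background. The array shapes, field names, and threshold value below are illustrative assumptions, not the operational formulation.

```python
# Hedged sketch of infrared temporal differencing: flag pixels whose
# observed cooling exceeds the predicted clear-scene change by more than
# a threshold. Threshold and sample values are illustrative.
import numpy as np

def temporal_cloud_mask(tb_prev, tb_curr, tb_clear_prev, tb_clear_curr,
                        thresh_k=4.0):
    """Flag cloud where IR cooling exceeds the predicted clear-scene change."""
    observed_change = tb_curr - tb_prev                 # K, per collocated pixel
    predicted_change = tb_clear_curr - tb_clear_prev    # K, from the skin-T model
    # New or moving cloud cools the scene faster than the clear background would.
    return (predicted_change - observed_change) > thresh_k

tb_prev = np.array([[290.0, 291.0], [289.0, 288.0]])
tb_curr = np.array([[289.5, 282.0], [288.5, 287.5]])   # pixel (0,1) cools by 9 K
clear_prev = np.full((2, 2), 290.0)
clear_curr = np.full((2, 2), 289.5)                    # predicted evening cooling: 0.5 K
print(temporal_cloud_mask(tb_prev, tb_curr, clear_prev, clear_curr))
```

Only the pixel whose cooling greatly exceeds the predicted clear-scene change is flagged; the background fields matter only through their time difference, which is the point made above.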
The temporal-differencing cloud-detection algorithm was developed with a one- or two-channel processing paradigm in mind, in support of sensor channels that are routinely available on all multinational geostationary satellites. However, additional cloud tests have been added to exploit the multispectral data available from GOES satellites. The addition of full-time MWIR and split longwave TIR channels to the GOES imagers significantly enhances the temporal-differencing algorithm's capability to identify certain types of clouds whose detection has traditionally been problematic using two-channel techniques. In particular, detection of persistent low water droplet clouds and fog at night as well as optically thin ice crystal cirrus is improved in the operational cloud analysis product. This in turn enhances the ability of temporal-differencing algorithms to serve as a robust cloud detection and analysis program.
Grateful thanks are extended to Joseph W. Snow and James T. Bunting for their thoughtful contributions to the temporal-differencing algorithm development effort, to Joyce Grace for her help with HTML programming, and to Daniel C. Peduzzi and Brian T. Pearson for writing the geostationary temporal-differencing software for our Silicon Graphics and DEC VMS computers. Thanks are also offered to the three reviewers who provided thoughtful suggestions for improving our paper. This work was supported jointly by the Department of Defense, Department of Energy, and the Environmental Protection Agency through the Strategic Environmental Research and Development Program (SERDP) under Contract F19628-92-C-0149.
Corresponding author address: Dr. Robert P. d'Entremont, Atmospheric and Environmental Research, Inc., 131 Hartwell Ave., Lexington, MA 02421-3126. firstname.lastname@example.org