Abstract
Geostationary observations provide measurements of the cloud liquid water path (LWP), permitting continuous observation of cloud evolution throughout the daylit portion of the diurnal cycle. Relative to LWP derived from microwave imagery, these observations have biases related to scattering geometry, which systematically varies throughout the day. Therefore, we have developed a set of bias corrections using microwave LWP for the Geostationary Operational Environmental Satellite-16 and -17 (GOES-16 and GOES-17) observations of LWP derived from retrieved cloud-optical properties. The bias corrections are defined based on scattering geometry (solar zenith, sensor zenith, and relative azimuth) and low cloud fraction. We demonstrate that over the low-cloud regions of the northeast and southeast Pacific, these bias corrections drastically improve the characteristics of the retrieved LWP, including its regional distribution, diurnal variation, and evolution along short-time-scale Lagrangian trajectories.
Significance Statement
Large uncertainty exists in cloud liquid water path derived from geostationary observations, caused by changes in the scattering geometry of sunlight throughout the day. This limits the usefulness of geostationary data for analyzing the time evolution of clouds. Therefore, microwave imagery observations of liquid water path, which do not depend on scattering geometry, are used to create a set of corrections for geostationary data that can be applied in future studies to analyze the time evolution of clouds from space.
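As an illustration of how such a geometry-dependent correction might be constructed and applied, the sketch below bins matched geostationary and microwave LWP retrievals by scattering geometry and low cloud fraction and stores the mean ratio per bin; the bin edges, variable names, and the use of a simple per-bin ratio are illustrative assumptions, not the authors' exact procedure.

```python
# Hypothetical sketch of a bin-wise LWP bias correction (illustrative, not the authors' code).
import numpy as np

def build_correction(lwp_geo, lwp_mw, sza, vza, raa, cf, bins):
    """Mean GEO/microwave LWP ratio in each (SZA, VZA, RAA, cloud-fraction) bin."""
    idx = [np.digitize(x, b) - 1 for x, b in zip((sza, vza, raa, cf), bins)]
    shape = tuple(len(b) - 1 for b in bins)
    ratio_sum = np.zeros(shape)
    count = np.zeros(shape)
    np.add.at(ratio_sum, tuple(idx), lwp_geo / lwp_mw)
    np.add.at(count, tuple(idx), 1.0)
    with np.errstate(invalid="ignore"):
        return ratio_sum / count          # NaN where a bin has no samples

def apply_correction(lwp_geo, sza, vza, raa, cf, bins, ratio):
    """Divide out the geometry-dependent bias for each geostationary retrieval."""
    idx = tuple(np.digitize(x, b) - 1 for x, b in zip((sza, vza, raa, cf), bins))
    return lwp_geo / ratio[idx]
```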
Abstract
Optical lightning observations from low-Earth orbit play an important role in our understanding of long-term global lightning trends. Lightning Imaging Sensors (LIS) on the Tropical Rainfall Measuring Mission (TRMM) satellite (1997–2015) and the International Space Station (ISS; 2017–present) capture optical emissions produced by lightning. This study uses the well-documented TRMM LIS performance to determine if the ISS LIS performs well enough to bridge the gap between TRMM LIS and the new generation of Geostationary Lightning Mappers (GLMs). The average events per group and groups per flash for ISS LIS are 3.6 and 9.9, which are 18% and 10% lower than for TRMM LIS, respectively. ISS LIS has 30% lower mean group energy density and 30%–50% lower mean flash energy density than TRMM LIS in their common (±38°) latitude range. These differences are likely the result of larger pixel areas for ISS LIS over most of the field of view due to off-nadir pointing, combined with viewing obstructions and possible engineering differences. For both instruments, radiometric sensitivity decreases radially from the center of the array to the edges. ISS LIS sensitivity falls off faster and more variably, in part because of the off-nadir pointing. Event energy density analysis indicates some anomalous hotspot pixels in the ISS LIS pixel array that were not present in the TRMM LIS. Despite these differences, ISS LIS provides similar parameter values to TRMM LIS with the expectation of somewhat lower lightning detection capability. In addition, recalculation of the event, group, and flash areas for both LIS datasets is strongly recommended since the archived values in the current release versions have significant errors.
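For readers unfamiliar with the LIS event/group/flash hierarchy, a minimal sketch of how the events-per-group and groups-per-flash statistics quoted above could be computed from granule tables; the table and column names are assumptions, not the archive's actual field names.

```python
# Hypothetical sketch: per-granule LIS flash statistics (column names are assumed).
import pandas as pd

def lis_ratios(events: pd.DataFrame, groups: pd.DataFrame) -> tuple[float, float]:
    """Mean events per group and groups per flash, as compared in the text."""
    events_per_group = events.groupby("parent_group_id").size().mean()
    groups_per_flash = groups.groupby("parent_flash_id").size().mean()
    return float(events_per_group), float(groups_per_flash)
```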
Abstract
A retrospective tropical Indian Ocean dipole mode (IOD) hindcast for 1958–2014 was conducted using 20 models from the sixth phase of the Coupled Model Intercomparison Project (CMIP6), with a model-based analog forecast (MAF) method. In the MAF approach, forecast ensembles are extracted from preexisting model simulations by finding the states that initially best match an observed anomaly and tracking their subsequent evolution, with no additional model integrations. By optimizing the key factors in the MAF method, we suggest that the optimal domain for the analog criteria should be concentrated in the tropical Indian Ocean region for IOD predictions. Including external forcing trends improves the skills of the east and west poles of the IOD, but not the IOD prediction itself. The MAF IOD prediction showed comparable skills to the assimilation-initialized hindcast, with skillful predictions corresponding to a 4- and 3-month lead, respectively. The IOD forecast skill had significant decadal variations during the 55-yr period, with low skill after the early 2000s and before 1985 and high skill during 1985–2000. This work offers a computationally efficient and practical approach for seasonal prediction of the tropical Indian Ocean sea surface temperature.
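As a hypothetical illustration of the analog selection step described above, the sketch below finds the library states closest to an observed initial anomaly over the analog domain and averages their subsequent evolution; the RMSE distance metric, ensemble size, and array layout are assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch of a model-analog forecast (MAF) step, not the authors' code.
import numpy as np

def analog_forecast(obs_anom, library, lead, n_analogs=20):
    """
    obs_anom : (ny, nx) observed SST anomaly over the analog domain
               (e.g., the tropical Indian Ocean).
    library  : (ntime, ny, nx) anomalies from preexisting CMIP6 simulations.
    lead     : forecast lead in months.
    Returns the ensemble-mean analog state at the requested lead.
    """
    # Distance of every library state from the observed initial condition.
    dist = np.sqrt(np.nanmean((library - obs_anom) ** 2, axis=(1, 2)))
    # Exclude initial times whose evolution would run past the end of the library.
    dist[len(dist) - lead:] = np.inf
    best = np.argsort(dist)[:n_analogs]
    # Track the selected analogs forward by `lead` months and average them.
    return library[best + lead].mean(axis=0)
```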
Abstract
The 2010–12 Acoustic Technology for Observing the Interior of the Arctic Ocean (ACOBAR) experiment provided acoustic tomography data along three 167–301-km-long sections in Fram Strait between Greenland and Spitsbergen. Ocean sound speed data were assimilated into a regional numerical ocean model using the Massachusetts Institute of Technology General Circulation Model–Estimating the Circulation and Climate of the Ocean four-dimensional variational (MITgcm-ECCO 4DVAR) assimilation system. The resulting state estimate matched the assimilated sound speed time series; the root-mean-square (RMS) error of the sound speed estimate (∼0.4 m s⁻¹) is smaller than the uncertainty of the measurement (∼0.8 m s⁻¹). Data assimilation improved modeled range- and depth-averaged ocean temperatures at the 78°50′N oceanographic mooring section in Fram Strait. The RMS error of the state estimate (0.21°C) is comparable to the uncertainty of the interpolated mooring section (0.23°C). Lack of depth information in the assimilated ocean sound speed measurements caused an increased temperature bias in the upper ocean (0–500 m). The correlations with the mooring section were not improved, as short-term variations in the mooring measurements and the ocean state estimate do not always coincide in time. This is likely due to the small-scale eddying and nonlinearity of the ocean circulation in Fram Strait. Furthermore, the horizontal resolution of the state estimate (4.5 km) is eddy permitting, rather than eddy resolving. Thus, the state estimate cannot represent the full ocean dynamics of the region. This study is the first to demonstrate the usefulness of large-scale acoustic measurements for improving ocean state estimates at high latitudes.
Significance Statement
Acoustic tomography measurements allow one to observe ocean temperature in large ocean volumes under the Arctic sea ice by measuring sound speed; such volume-averaged temperatures are hard to observe synoptically by other methods. This study has established methods for assimilation of depth- and range-averaged ocean sound speed from an acoustic tomography experiment in Fram Strait. For the first time, a 2-yr time series of ocean sound speed from acoustic tomography has been assimilated into an ocean state estimate. The results highlight the use of ocean tomography in ice-covered regions to improve state estimates of ocean temperature.
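As an illustration of the kind of observation operator involved, the sketch below maps model temperature and salinity on a vertical section to a range- and depth-averaged sound speed using the Mackenzie (1981) nine-term empirical equation; the choice of sound speed formula and the simple unweighted average are assumptions, not necessarily the ACOBAR/MITgcm-ECCO implementation.

```python
# Illustrative sound-speed observation operator using the Mackenzie (1981) equation.
import numpy as np

def mackenzie_sound_speed(T, S, D):
    """Sound speed (m/s) from temperature T (deg C), salinity S (psu), depth D (m)."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

def section_mean_sound_speed(T, S, D):
    """Unweighted range- and depth-average over a (depth, range) model section."""
    return float(np.mean(mackenzie_sound_speed(T, S, D)))
```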
Abstract
In 2013 and 2014, multiple field experiments of varying scope were conducted on the Columbia River, a highly energetic, partially mixed estuary. These experiments included surface drifter and synthetic aperture radar (SAR) measurements during the ONR RIVET-II experiment, and a novel animal-tracking effort that sampled oceanographic data using cormorants tagged with biologging devices. In the present work, several different data types from these experiments were combined in order to test an iterative, ensemble-based inversion methodology at the mouth of the Columbia River (MCR). Results show that, despite inherent limitations of observation and model accuracy, it is possible to detect dynamically relevant bathymetric features such as large shoals and channels in a partially mixed estuary, starting from a linear, featureless prior bathymetry, by inverting surface current and gravity wave observations with a 3D hydrostatic ocean model. Bathymetry estimation skill depends on two factors: location (i.e., differing estimation quality inside versus outside the MCR) and observation type (e.g., surface currents versus significant wave height). Although temperature and salinity were not inverted directly, their agreement with observations in the hydrodynamic model improved after the bathymetry inversion.
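The ensemble-based inversion amounts to updating candidate bathymetries from model–observation misfits; below is a heavily simplified, single-iteration, ensemble Kalman-style sketch with the forward model left as a stub (the study itself uses an iterative procedure with a 3D hydrostatic ocean model, so this is illustrative only, not the authors' algorithm).

```python
# Heavily simplified ensemble-based bathymetry update (illustrative only).
import numpy as np

def ensemble_bathymetry_update(h_ens, y_obs, forward_model, obs_err_var):
    """
    h_ens         : (n_ens, n_cells) ensemble of candidate bathymetries.
    y_obs         : (n_obs,) observed surface currents / wave heights.
    forward_model : maps one bathymetry vector to predicted observations.
    obs_err_var   : assumed observation-error variance (scalar).
    """
    y_ens = np.array([forward_model(h) for h in h_ens])        # (n_ens, n_obs)
    h_mean, y_mean = h_ens.mean(0), y_ens.mean(0)
    Hp, Yp = h_ens - h_mean, y_ens - y_mean
    n = len(h_ens)
    C_hy = Hp.T @ Yp / (n - 1)                                  # bathymetry-obs covariance
    C_yy = Yp.T @ Yp / (n - 1) + obs_err_var * np.eye(len(y_obs))
    K = C_hy @ np.linalg.inv(C_yy)                              # Kalman-type gain
    return h_ens + (y_obs - y_ens) @ K.T                        # updated ensemble
```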
Abstract
Downbursts are rapidly evolving meteorological phenomena with numerous vertically oriented precursor signatures, and the temporal resolution and vertical sampling of the current NEXRAD system are too coarse to observe their evolution and precursor signatures properly. A future all-digital polarimetric phased-array weather radar (PAR) should be able to improve both temporal resolution and spatial sampling of the atmosphere to provide better observations of rapidly evolving hazards such as downbursts. Previous work has focused on understanding the trade-offs associated with using various scanning techniques on stationary PARs; however, a rotating, polarimetric PAR (RPAR) is a more feasible and cost-effective candidate. Thus, understanding the trade-offs associated with various scanning techniques on an RPAR is vital for learning how best to observe downbursts with such a system. This work develops a framework for analyzing the trade-offs associated with different scanning strategies in the observation of downbursts and their precursor signatures. A proof-of-concept analysis—which uses a Cloud Model 1 (CM1)-simulated downburst-producing thunderstorm—is also performed with both conventional and imaging scanning strategies in an adaptive scanning framework to show the potential value and feasibility of the framework. Preliminary results from the proof-of-concept analysis indicate that there is indeed a limit to the benefits of imaging as an update-time speedup method: as imaging is used to achieve larger speedup factors, the corresponding data degradation begins to hinder observations of various precursor signatures.
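A back-of-the-envelope illustration of the update-time versus data-quality trade-off that such a framework weighs; the beam counts, dwell time, and the sensitivity-loss rule of thumb below are placeholder assumptions, not values from the study.

```python
# Illustrative arithmetic for the imaging speedup trade-off (numbers are placeholders).
import math

n_az, n_el = 360, 20      # assumed beam positions per volume scan
dwell_s = 0.05            # assumed dwell time per beam position (s)

for speedup in (1, 2, 4, 8):   # 1 = conventional pencil-beam scanning
    update_time = n_az * n_el * dwell_s / speedup
    # Spoiling the transmit beam to cover `speedup` positions at once spreads the
    # transmitted power, costing roughly 10*log10(speedup) dB of sensitivity
    # (all else equal) -- one form of the data degradation weighed above.
    loss_db = 10.0 * math.log10(speedup)
    print(f"x{speedup}: {update_time:6.1f} s per volume, ~{loss_db:.1f} dB sensitivity loss")
```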
Abstract
One long-standing technical problem affecting the accuracy of eddy correlation air–sea CO2 flux estimates has been motion contamination of the CO2 mixing-ratio measurement. This sensor-related problem is well known, but its source remains unresolved. This report details an attempt to identify and reduce motion-induced error and to improve the infrared gas analyzer (IRGA) design. The key finding is that a large fraction of the motion sensitivity is associated with the detection approach common to most closed- and open-path IRGAs employed today for CO2 and H2O measurements. A new prototype sensor was developed to both investigate and remedy the issue. Results from laboratory and deep-water tank tests show marked improvement. The prototype shows a factor of 4–10 reduction in CO2 error under typical at-sea buoy pitch and roll tilts in comparison with an off-the-shelf IRGA system. A similar noise reduction factor of 2–8 is observed in water vapor measurements. The range of platform tilt motion testing also helps to document the motion-induced error characteristics of standard analyzers. Study implications are discussed, including findings relevant to past field measurements and the promise of improved future flux measurements using similarly modified IRGAs on moving ocean observing and aircraft platforms.
Abstract
We present nowcasts of sudden heavy rains on meso-γ scales (2–20 km) using the high spatiotemporal resolution of a multiparameter phased-array weather radar (MP-PAWR) sensitive to rain droplets. The onset of typical storms is successfully predicted with a 10-min lead time, i.e., the current predictability limit of rainfall caused by individual convective cores. A supervised recurrent neural network based on long short-term memory with 3D spatial convolutions (RN3D) is used to account for the horizontal and vertical changes of the convective cells with a time resolution of 30 s. The model uses radar reflectivity at horizontal polarization ZH and the differential reflectivity. The input parameters are defined in a volume of 64 × 64 × 8 km³ with the lowest level at 1.9 km and a resolution of 0.4 × 0.4 × 0.25 km³. The prediction is a 10-min sequence of ZH at the lowest grid level. The model is trained with a large number of observations from summer 2020 and an adversarial technique. RN3D is tested with different types of rapidly evolving, localized heavy rainfall from the summers of 2018 and 2019. The model performance is compared to that of an advection model for 3D extrapolation of PAWR echoes (A3DM). RN3D better predicts the formation and dissipation of precipitation. However, RN3D tends to underestimate heavy rainfall, especially when the storm is well developed; in this phase of the storm, A3DM nowcast scores are slightly higher. The high skill of RN3D in predicting the onset of sudden localized rainfall is illustrated with an example for which RN3D outperforms the operational precipitation nowcasting system of the Japan Meteorological Agency (JMA).
Significance Statement
Temporal extrapolation of radar observations is a means of nowcasting sudden heavy rains, i.e., producing forecasts with lead times of a few tens of minutes and a spatial resolution finer than 500 m. Such nowcasts are necessary for warning systems that anticipate damage to infrastructure and reduce the fatalities these storms cause. Nowcasting is a difficult task because the suddenness, limited spatial extent, and nonlinear behavior of these storms are not well captured by current operational observation and numerical systems. In this study, we use a new high-resolution weather radar with polarimetric information and a 3D recurrent neural network to improve 10-min nowcasts, the current limit of operational systems. This is a first and essential step before applying such a method to increase the prediction lead time.
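A minimal sketch of what an RN3D-like network could look like in Keras, built around ConvLSTM layers with 3D spatial kernels operating on a two-channel (ZH, ZDR) volume; the number of input frames, layer widths, decoder, and the plain mean-squared-error loss are assumptions (the paper additionally uses an adversarial training technique), so this is not the authors' implementation.

```python
# Hypothetical sketch of an RN3D-like nowcasting network (sizes and loss are assumed).
import tensorflow as tf

T_IN, T_OUT = 10, 20                      # assumed input frames; 20 x 30 s = 10-min output
NZ, NY, NX, NC = 32, 160, 160, 2          # 64 x 64 x 8 km^3 at 0.4 x 0.4 x 0.25 km^3; ZH + ZDR

inputs = tf.keras.Input(shape=(T_IN, NZ, NY, NX, NC))
x = tf.keras.layers.ConvLSTM3D(16, kernel_size=3, padding="same",
                               return_sequences=True)(inputs)
x = tf.keras.layers.ConvLSTM3D(8, kernel_size=3, padding="same",
                               return_sequences=False)(x)            # (NZ, NY, NX, 8)
# Collapse the vertical dimension and map to T_OUT future ZH frames at the lowest level.
x = tf.keras.layers.Conv3D(T_OUT, kernel_size=(NZ, 1, 1))(x)          # (1, NY, NX, T_OUT)
x = tf.keras.layers.Reshape((NY, NX, T_OUT))(x)
outputs = tf.keras.layers.Permute((3, 1, 2))(x)                       # (T_OUT, NY, NX)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")   # the actual model also uses an adversarial loss
```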
Abstract
This article presents a calibration transfer methodology that can be used between radars of the same or different frequency bands. This method enables the absolute calibration of a cloud radar by transferring it from another collocated instrument with known calibration, by simultaneously measuring vertical ice cloud reflectivity profiles. The advantage is that the added uncertainty in the newly calibrated instrument can converge to the magnitude of the reference instrument calibration. This is achieved by carefully selecting comparable data, including identifying the reflectivity range that avoids the disparities introduced by differences in sensitivity or scattering regime. The result is a correction coefficient used to compensate for the measurement bias in the uncalibrated instrument. The calibration transfer uncertainty can be reduced by increasing the number of sampling periods. The methodology was applied between collocated W-band radars deployed during the ICE-GENESIS campaign (Switzerland, 2020/21). A difference of 2.2 dB was found in their reflectivity measurements, with an uncertainty of 0.7 dB. The calibration transfer was also applied to radars of different frequencies: an X-band radar with unknown calibration and a W-band radar with manufacturer calibration; the difference found was −16.7 dB, with an uncertainty of 1.2 dB. The method was validated through closure, by transferring calibration among three different radars in two case studies. For the first case, involving three W-band radars, the bias found was 0.2 dB. In the second case, involving two W-band radars and one X-band radar, the bias found was 0.3 dB. These results imply that the biases introduced by performing the calibration transfer with this method are negligible.
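At its core, the transfer reduces to estimating a single dB offset between matched reflectivity samples inside the carefully selected range; a simplified sketch follows, with the reflectivity window thresholds and the median/standard-error statistics chosen here purely for illustration.

```python
# Simplified sketch of the calibration-transfer offset estimate (thresholds are assumed).
import numpy as np

def calibration_offset_db(z_ref_dbz, z_uncal_dbz, zmin=-10.0, zmax=5.0):
    """
    z_ref_dbz, z_uncal_dbz : matched, collocated ice-cloud reflectivity samples (dBZ)
                             from the reference and uncalibrated radars.
    zmin, zmax             : reflectivity window chosen to avoid sensitivity and
                             non-Rayleigh scattering disparities (values illustrative).
    """
    ok = np.isfinite(z_ref_dbz) & np.isfinite(z_uncal_dbz)
    ok &= (z_ref_dbz > zmin) & (z_ref_dbz < zmax)
    diff = z_uncal_dbz[ok] - z_ref_dbz[ok]
    # Correction coefficient (dB) plus a simple uncertainty estimate; averaging over
    # more sampling periods reduces this uncertainty, as noted in the abstract.
    return np.median(diff), np.std(diff) / np.sqrt(len(diff))
```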
Abstract
Two multispectral satellite imagery products are presented that were developed for use within the fire management community. These products, which take the form of false-color red–green–blue composites, were designed to aid fire detection and characterization and to assess the environment surrounding a fire. The first, named the Fire Temperature RGB, uses spectral channels near 1.6, 2.2, and 3.9 μm for fire detection and rapid assessment of the range of fire intensity through intuitive coloration. The second, named the Day Fire RGB, uses spectral channels near 0.64, 0.86, and 3.9 μm for rapid scene assessment. The 0.64-μm channel provides information on smoke, the 0.86-μm channel provides information on vegetation health and burn scars, and the 3.9-μm channel provides active fire detections. Examples of these red–green–blue composite images developed from observations collected by three operational satellite imagers (VIIRS on a polar-orbiting platform, and the Advanced Baseline Imager and Advanced Himawari Imager on geostationary platforms) demonstrate that both red–green–blue composites are useful for fire detection and contain valuable information that is not present within operational fire detection algorithms. In particular, it is shown that Fire Temperature RGB and Day Fire RGB images from VIIRS have similar utility for fire detection as the operational VIIRS Active Fire products, with the added benefit that the imagery provides context for more than just the fires themselves.
Significance Statement
The current generation of operational polar-orbiting weather satellites that began with the launch of Suomi NPP offers new capabilities with regard to fire detection and monitoring. In particular, false color red–green–blue composite imagery is now being used by fire managers, incident meteorologists, and others in the fire management community to visualize a fire’s behavior and the context in which it occurs. This paper outlines two of these red–green–blue composites that have gained widespread use throughout the U.S. National Weather Service and the Alaska Fire Service. These red–green–blue composites have been applied to the current generation of geostationary and polar-orbiting satellites to great effect and have changed how incident management teams respond to wildland fires.
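As a generic illustration of how such false-color composites are assembled from scaled channel data, the sketch below stacks three scaled bands into an RGB image; the channel-to-color assignments, scaling ranges, and gamma values are placeholders rather than the published Fire Temperature or Day Fire recipes.

```python
# Generic false-color RGB assembly (ranges and assignments are placeholders,
# not the published Fire Temperature / Day Fire recipes).
import numpy as np

def scale(band, vmin, vmax, gamma=1.0):
    """Linearly scale a band to [0, 1] and apply a gamma stretch."""
    out = np.clip((band - vmin) / (vmax - vmin), 0.0, 1.0)
    return out ** (1.0 / gamma)

def false_color_rgb(red_band, grn_band, blu_band, ranges, gammas=(1, 1, 1)):
    """Stack three scaled channels into an (ny, nx, 3) image array."""
    return np.dstack([scale(b, *r, g) for b, r, g in
                      zip((red_band, grn_band, blu_band), ranges, gammas)])
```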