Search Results

You are looking at 1 - 10 of 31 items for

  • Author or Editor: Steven D. Miller
  • All content
Curtis J. Seaman and Steven D. Miller

The day/night band of the Visible Infrared Imaging Radiometer Suite has been found to be capable of observing rapid motions of the aurora. The images that led to this discovery are shown. Shifts in the apparent position of the auroral boundary between consecutive scans of the instrument, which occur ~1.79 s apart, allow the cross-track relative speed of the aurora to be calculated. The physical basis for these observations and the method for determining the speed of auroral motions are discussed. These new satellite observations compare favorably with ground-based measurements presented in previous studies.
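
For illustration, the sketch below converts an apparent boundary shift, measured in day/night band pixels, into a cross-track speed using the ~1.79-s scan interval quoted above; the nominal pixel size and the example shift are assumed values, not figures from the article.

```python
# Cross-track speed implied by an apparent boundary shift between two
# consecutive day/night band scans separated by ~1.79 s.
SCAN_INTERVAL_S = 1.79      # time between consecutive scans (s), from the text
PIXEL_SIZE_KM = 0.75        # nominal day/night band pixel size (km); assumed

def cross_track_speed_km_s(shift_pixels: float,
                           pixel_size_km: float = PIXEL_SIZE_KM,
                           dt_s: float = SCAN_INTERVAL_S) -> float:
    """Speed implied by a boundary displacement of `shift_pixels` pixels."""
    return shift_pixels * pixel_size_km / dt_s

# Hypothetical example: a 20-pixel shift implies ~8.4 km/s apparent motion.
print(f"{cross_track_speed_km_s(20.0):.1f} km/s")
```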

Full access
Thomas F. Lee, Steven D. Miller, Carl Schueler, and Shawn Miller

Abstract

The Visible/Infrared Imager Radiometer Suite (VIIRS), scheduled to fly on the satellites of the National Polar-orbiting Operational Environmental Satellite System, will combine the missions of the Advanced Very High Resolution Radiometer (AVHRR), which flies on current National Oceanic and Atmospheric Administration satellites, and the Operational Linescan System aboard the Defense Meteorological Satellite Program satellites. VIIRS will offer a number of improvements to weather forecasters. First, because of a sophisticated downlink and relay system, VIIRS latencies will be 30 min or less around the globe, improving the timeliness and therefore the operational usefulness of the images. Second, with 22 channels, VIIRS will offer many more products than its predecessors. As an example, a true-color simulation is shown using data from the Earth Observing System’s Moderate Resolution Imaging Spectroradiometer (MODIS), an application that current geostationary imagers cannot produce because they lack a “green” wavelength channel. Third, VIIRS images will have improved quality. Through a unique pixel aggregation strategy, VIIRS pixels will not expand rapidly toward the edge of a scan like those of MODIS or AVHRR. Data will retain nearly the same resolution at the edge of the swath as at nadir. Graphs and image simulations depict the improvement in output image quality. Last, the NexSat Web site, which provides near-real-time simulations of VIIRS products, is introduced.
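
The true-color simulation mentioned above combines red, green, and blue reflectance channels into a single composite; a minimal sketch follows, assuming the conventional MODIS band 1/4/3 color mapping and a simple gamma stretch, details the abstract does not specify.

```python
import numpy as np

def true_color_composite(red: np.ndarray,
                         green: np.ndarray,
                         blue: np.ndarray,
                         gamma: float = 2.2) -> np.ndarray:
    """Stack three reflectance channels (values in [0, 1]) into an RGB image
    with a simple gamma stretch.  For MODIS, the conventional choice is
    band 1 (red), band 4 (green), and band 3 (blue); the gamma value is an
    assumed display parameter."""
    rgb = np.clip(np.dstack([red, green, blue]), 0.0, 1.0)
    return rgb ** (1.0 / gamma)
```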

Full access
Kyle A. Hilburn, Imme Ebert-Uphoff, and Steven D. Miller

Abstract

The objective of this research is to develop techniques for assimilating GOES-R series observations in precipitating scenes for the purpose of improving short-term convective-scale forecasts of high-impact weather hazards. One approach is radiance assimilation; however, the information content of radiances from the GOES-R Advanced Baseline Imager saturates in precipitating scenes, and radiance assimilation does not make use of lightning observations from the Geostationary Lightning Mapper. Here, a convolutional neural network (CNN) is developed to transform GOES-R radiances and lightning into synthetic radar reflectivity fields so that existing radar assimilation techniques can be used. We find that the ability of CNNs to utilize spatial context is essential for this application and offers breakthrough improvement in skill compared with traditional pixel-by-pixel approaches. To understand the improved performance, we use a novel analysis method that combines several techniques, each providing different insights into the network’s reasoning. Channel-withholding experiments and spatial information–withholding experiments show that the CNN achieves skill at high reflectivity values from the information content in radiance gradients and the presence of lightning. The attribution method, layerwise relevance propagation, demonstrates that the CNN uses radiance and lightning information synergistically, with lightning helping the CNN focus on which neighboring locations are most important. Synthetic inputs are used to quantify the sensitivity to radiance gradients, showing that sharper gradients produce a stronger response in predicted reflectivity. Lightning observations are found to be uniquely valuable for their ability to pinpoint locations of strong radar echoes.
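
To make the image-to-image setup concrete, here is a minimal fully convolutional sketch that maps a stack of ABI radiance channels plus a lightning field to a single synthetic reflectivity field; the layer sizes and channel counts are illustrative assumptions, not the network described in the paper.

```python
import torch
from torch import nn

class SyntheticReflectivityCNN(nn.Module):
    """Minimal fully convolutional sketch that maps a stack of ABI radiance
    channels plus a GLM lightning field to a single synthetic radar
    reflectivity field on the same grid.  Channel counts and layer sizes are
    illustrative only."""

    def __init__(self, n_inputs: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_inputs, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),   # reflectivity-like output
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_inputs, ny, nx) -> (batch, 1, ny, nx)
        return self.net(x)

# Example: four ABI channels plus one lightning channel on a 256 x 256 tile.
model = SyntheticReflectivityCNN(n_inputs=5)
synthetic_refl = model(torch.randn(1, 5, 256, 256))
```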

Open access
Steven D. Miller, Thomas F. Lee, and Robert L. Fennimore

Abstract

This paper presents two multispectral enhancement techniques for distinguishing between regions of cloud and snow cover using passive optical-spectrum satellite radiometer observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). Fundamental to the techniques are the 1.6- and 2.2-μm shortwave infrared bands, which are useful in distinguishing between absorbing snow cover (low reflectance) and less absorbing liquid-phase clouds (higher reflectance). The 1.38-μm band helps to overcome ambiguities that arise in the case of optically thin cirrus. Designed to provide straightforward, stand-alone environmental characterization for operational forecasters (e.g., military weather forecasters in the context of mission planning), these products portray the information contained within complex scenes as value-added, readily interpretable imagery at the highest available spatial resolution. Their utility in scene characterization and in the quality control of digital snow maps is demonstrated.
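
As a rough illustration of how the 1.6-, 2.2-, and 1.38-μm bands separate snow, liquid cloud, and thin cirrus, the sketch below applies simple reflectance thresholds; the threshold values and the hard three-way classification are assumptions for illustration, whereas the paper's products are value-added imagery rather than a categorical mask.

```python
import numpy as np

def snow_cloud_cirrus_labels(ref_1_6um: np.ndarray,
                             ref_2_2um: np.ndarray,
                             ref_1_38um: np.ndarray,
                             snow_thresh: float = 0.10,
                             cirrus_thresh: float = 0.03) -> np.ndarray:
    """Label pixels as 0 = snow, 1 = liquid-phase cloud, 2 = thin cirrus
    from MODIS reflectances (0-1).  Snow absorbs strongly at 1.6/2.2 um
    (low reflectance), liquid clouds remain bright there, and the 1.38-um
    band flags thin cirrus.  Threshold values and the decision order are
    placeholders chosen for illustration."""
    labels = np.ones_like(ref_1_6um, dtype=np.int8)   # default: liquid cloud
    swir = 0.5 * (ref_1_6um + ref_2_2um)
    labels[swir < snow_thresh] = 0                    # dark in the SWIR: snow
    labels[ref_1_38um > cirrus_thresh] = 2            # bright at 1.38 um: cirrus
    return labels
```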

Full access
Daniel T. Lindsey, Steven D. Miller, and Louie Grasso
Full access
Steven J. Fletcher, Glen E. Liston, Christopher A. Hiemstra, and Steven D. Miller

Abstract

In this paper, four simple, computationally inexpensive direct-insertion data assimilation schemes are presented and evaluated for assimilating Moderate Resolution Imaging Spectroradiometer (MODIS) snow cover, which is a binary observation, and Advanced Microwave Scanning Radiometer for Earth Observing System (EOS) (AMSR-E) snow water equivalent (SWE) observations, which are at a coarser resolution than MODIS, into a numerical snow evolution model. The four schemes are 1) assimilate MODIS snow cover on its own, with an arbitrary 0.01 m added to model cells where there is a difference in snow cover; 2) iteratively change the model SWE values to match the AMSR-E equivalent value; 3) the AMSR-E scheme with MODIS observations constraining which cells can be changed, when both sets of observations are available; and 4) the MODIS-only scheme when the AMSR-E observations are not available, and otherwise scheme 3. These schemes are applied to the winter of 2006/07 over the southeast corner of Colorado and the tri-state area of Wyoming, Colorado, and Nebraska. It is shown that including the MODIS data increases, by 15% in the north domain and by approximately 5% in the south domain, the number of days on which the model disagrees with the MODIS observation 24 h later by less than 10%. It is shown that the AMSR-E scheme has more of an impact in the south domain than in the north domain. The assimilation results are also compared with station snow-depth data in both domains; the assimilation schemes underestimate snow depth by up to a factor of 5 relative to the station data, but the snow evolution is fairly consistent.
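
A minimal sketch of scheme 1, the MODIS-only direct insertion, is given below; the array layout and the step that clears model snow where MODIS reports none are assumptions added here for illustration.

```python
import numpy as np

SWE_INCREMENT_M = 0.01   # arbitrary increment used by scheme 1

def insert_modis_snow_cover(model_swe: np.ndarray,
                            modis_snow_cover: np.ndarray) -> np.ndarray:
    """Direct insertion of binary MODIS snow cover (1 = snow, 0 = no snow)
    into a gridded model snow water equivalent (SWE, in meters) field.
    Where MODIS reports snow but the model has none, 0.01 m is added; the
    complementary step of clearing model snow where MODIS reports none is
    an assumption added for completeness."""
    swe = model_swe.copy()
    add_snow = (modis_snow_cover == 1) & (swe <= 0.0)
    clear_snow = (modis_snow_cover == 0) & (swe > 0.0)
    swe[add_snow] = SWE_INCREMENT_M
    swe[clear_snow] = 0.0
    return swe
```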

Full access
Richard L. Bankert, Cristian Mitrescu, Steven D. Miller, and Robert H. Wade

Abstract

Cloud-type classification based on multispectral satellite imagery data has been widely researched and demonstrated to be useful for distinguishing a variety of classes using a wide range of methods. The research described here compares the classifier output from two very different algorithms applied to Geostationary Operational Environmental Satellite (GOES) data over the course of one year. The first algorithm employs spectral channel thresholding and additional physically based tests. The second algorithm was developed through a supervised learning method, with characteristic features of expertly labeled image samples used as training data for a 1-nearest-neighbor classification. The latter’s ability to identify classes is also grounded in physics, but those relationships are embedded implicitly within the algorithm. A pixel-to-pixel comparison analysis was done for hourly daytime scenes within a region in the northeastern Pacific Ocean. Considerable agreement was found in this analysis, with many of the mismatches or disagreements providing insight into the strengths and limitations of each classifier. Depending upon user needs, a rule-based or other postprocessing system that combines the output from the two algorithms could provide the most reliable cloud-type classification.
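
A bare-bones version of the 1-nearest-neighbor step used by the second algorithm might look like the following; the feature set, distance metric, and any scaling used operationally are not specified in the abstract and are assumed here.

```python
import numpy as np

def classify_1nn(pixel_features: np.ndarray,
                 train_features: np.ndarray,
                 train_labels: np.ndarray) -> np.ndarray:
    """Assign each pixel's feature vector the cloud-type label of its single
    nearest training sample (Euclidean distance).
    pixel_features: (n_pixels, n_features); train_features: (n_train, n_features);
    train_labels: (n_train,)."""
    # Pairwise distances between every pixel and every training sample.
    dists = np.linalg.norm(
        pixel_features[:, None, :] - train_features[None, :, :], axis=2)
    return train_labels[np.argmin(dists, axis=1)]
```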

Full access
Steven D. Miller, Fang Wang, Ann B. Burgess, S. McKenzie Skiles, Matthew Rogers, and Thomas H. Painter

Abstract

Runoff from mountain snowpack is an important freshwater supply for many parts of the world. The deposition of aeolian dust on snow decreases snow albedo and increases the absorption of solar irradiance. This absorption accelerates melting, impacting the regional hydrological cycle in terms of timing and magnitude of runoff. The Moderate Resolution Imaging Spectroradiometer (MODIS) Dust Radiative Forcing in Snow (MODDRFS) satellite product allows estimation of the instantaneous (at time of satellite overpass) surface radiative forcing caused by dust. While such snapshots are useful, energy balance modeling requires temporally resolved radiative forcing to represent energy fluxes to the snowpack, as modulated primarily by varying cloud cover. Here, the instantaneous MODDRFS estimate is used as a tie point to calculate temporally resolved surface radiative forcing. Dust radiative forcing scenarios were considered for 1) clear-sky conditions and 2) all-sky conditions using satellite-based cloud observations. Comparisons against in situ stations in the Rocky Mountains show that accounting for the temporally resolved all-sky solar irradiance via satellite retrievals yields a more representative time series of dust radiative effects compared to the clear-sky assumption. The modeled impact of dust on enhanced snowmelt was found to be significant, accounting for nearly 50% of the total melt at the more contaminated station sites. The algorithm is applicable to regional basins worldwide, bearing relevance to both climate process research and the operational management of water resources.
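
The tie-point idea can be sketched as scaling the instantaneous MODDRFS forcing by the ratio of the time-resolved all-sky irradiance to the irradiance at overpass time; the simple linear scaling below is an assumption used only to illustrate the approach.

```python
import numpy as np

def time_resolved_dust_forcing(forcing_at_overpass_w_m2: float,
                               irradiance_series_w_m2: np.ndarray,
                               irradiance_at_overpass_w_m2: float) -> np.ndarray:
    """Extend the instantaneous MODDRFS dust radiative forcing (valid at the
    satellite overpass) to a full time series by scaling it with the ratio of
    the time-resolved all-sky solar irradiance to the irradiance at overpass
    time.  The linear scaling is an assumption used to illustrate the
    tie-point idea, not the paper's exact formulation."""
    scale = irradiance_series_w_m2 / irradiance_at_overpass_w_m2
    return forcing_at_overpass_w_m2 * scale
```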

Full access
Steven D. Miller, Daniel T. Lindsey, Curtis J. Seaman, and Jeremy E. Solbrig

Abstract

Value-added imagery is a useful means of communicating multispectral environmental satellite radiometer data to the human analyst. The most effective techniques strike a balance between science and art. The science side requires engineering physical algorithms capable of distilling the complex scene into a reduced set of key parameters. The artistic side involves design and construction of visually intuitive displays that maximize information content within the product image. The utility of such imagery to human analysts depends on the extent to which parameters or features of interest are conveyed unambiguously. Here, we detail and demonstrate a dynamic blended imagery technique, based on spatially variant transparency factors whose values are tied to algorithmically isolated parameters. The technique enables seamless display of multivariate information, and is applicable to any imaging system based on red–green–blue composites. We illustrate this technique in the context of GeoColor—an application of the Geostationary Operational Environmental Satellite R (GOES-R) series Advanced Baseline Imager (ABI) supporting operational forecasting and used widely in public communication of weather information.
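
At its core, the blending technique weights two image layers by a per-pixel transparency factor; the sketch below shows that operation, with the choice of layers and of the parameter driving the alpha field left as assumptions.

```python
import numpy as np

def blend_layers(foreground_rgb: np.ndarray,
                 background_rgb: np.ndarray,
                 alpha: np.ndarray) -> np.ndarray:
    """Blend two RGB image layers (shape: ny x nx x 3) using a per-pixel
    transparency field alpha (shape: ny x nx, values in [0, 1]).  In a
    GeoColor-like application, alpha might be derived from an algorithmically
    isolated parameter such as a low-cloud or city-lights field; that choice
    is an assumption here."""
    a = np.clip(alpha, 0.0, 1.0)[..., None]   # add an axis to broadcast over RGB
    return a * foreground_rgb + (1.0 - a) * background_rgb
```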

Open access
Mark A. Broomhall, Leon J. Majewski, Vincent O. Villani, Ian F. Grant, and Steven D. Miller

Abstract

Observations of top-of-atmosphere radiances from the Advanced Himawari Imager (AHI) blue, green, and red spectral bands can be used to produce high-temporal-resolution, true-color imagery at 1-km spatial resolution over the Asia–Pacific region. To enhance the interpretability and aesthetic appearance of these images, the top-of-atmosphere radiance data are processed to remove the Rayleigh-scattered atmospheric component, corrected for limb effects, and blended with brightness temperature data from a thermal infrared window band at night, and the resultant imagery is adjusted to optimize contrast. The contribution of Rayleigh scattering to the AHI observations is calculated by interpolating radiative transfer parameters from a preconstructed set of lookup tables created specifically for the Himawari-8 AHI instrument. A surface reflectance value for each pixel is calculated after the Rayleigh contribution is removed. The spectrally dependent reflectance values produced from the lookup table differ from the exact calculation by up to 18% at the planetary limb, by over 100% at the solar terminator, and by less than 0.5% at low to moderate solar and sensor zenith angles. The subsequent corrections applied for limb effects mitigate the areas with high interpolation error, which slightly reduces the spatial coverage but provides Rayleigh-corrected surface reflectance products with interpolation errors at or below 0.5%. Resolution sharpening reduces the nominal pixel size from 1000 to 500 m while still producing sharp images. The resultant images are colorful, visually intuitive, high contrast, and of sufficient spatial and temporal resolution to provide a unique and complementary observational tool for use by weather forecasters and the general public alike.
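
A minimal sketch of the lookup-table step is given below: the precomputed Rayleigh reflectance is interpolated to each pixel's illumination and viewing geometry and subtracted from the top-of-atmosphere reflectance. The table axes, interpolation settings, and final clipping are assumptions for illustration.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def rayleigh_corrected_reflectance(toa_reflectance: np.ndarray,
                                   solar_zenith: np.ndarray,
                                   sensor_zenith: np.ndarray,
                                   relative_azimuth: np.ndarray,
                                   lut_axes: tuple,
                                   lut_rayleigh: np.ndarray) -> np.ndarray:
    """Interpolate a precomputed Rayleigh reflectance table to each pixel's
    illumination/viewing geometry and subtract it from the top-of-atmosphere
    reflectance.  `lut_axes` holds the 1-D grids (solar zenith, sensor zenith,
    relative azimuth) and `lut_rayleigh` the corresponding 3-D table; this
    layout, and the clipping of the result, are assumptions for illustration."""
    interp = RegularGridInterpolator(lut_axes, lut_rayleigh,
                                     bounds_error=False, fill_value=None)
    points = np.stack([solar_zenith.ravel(),
                       sensor_zenith.ravel(),
                       relative_azimuth.ravel()], axis=-1)
    rayleigh = interp(points).reshape(toa_reflectance.shape)
    return np.clip(toa_reflectance - rayleigh, 0.0, 1.0)
```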

Full access