Search Results

Showing items 21–30 of 93 for Author or Editor: David A. Marks, all content.
Eraldo A. T. Matricardi, David L. Skole, Mark A. Cochrane, Jiaguo Qi, and Walter Chomentowski

Abstract

Selective logging degrades tropical forests. Logging operations vary in timing, location, and intensity. Evidence of this land use is rapidly obscured by forest regeneration and ongoing deforestation. A detailed study of selective logging operations was conducted near Sinop, State of Mato Grosso, Brazil, one of the key Amazonian logging centers. An 11-yr series of annual Landsat images (1992–2002) was used to detect and track logged forests across the landscape. A semiautomated method was applied and compared to both visual interpretation and field data. Although visual detection provided precise delineation of some logged areas, it missed many others. The semiautomated technique provided the best estimates of logging extent, largely independent of potential user bias. Multitemporal analyses allowed the authors to analyze the annual variations in logging and deforestation, as well as the interaction between them. It is shown that, because of both rapid regrowth and deforestation, evidence of logging activities often disappeared within 1–3 yr. During the 1992–2002 interval, a total of 11 449 km² of forest was selectively logged. Around 17% of these logged forests had been deforested by 2002. An intra-annual analysis was also conducted using four images spread over a single year. Nearly 3% of logged forests were rapidly deforested during the year in which logging occurred, indicating that even annual monitoring will underestimate logging extent. Great care will need to be taken when inferring logging rates from observations spaced more than a year apart, because earlier years of logging activity will be only partially detected.

Full access
Stuart A. Young, Mark A. Vaughan, Ralph E. Kuehn, and David M. Winker

Abstract

Profiles of atmospheric cloud and aerosol extinction coefficients are retrieved on a global scale from measurements made by the lidar on board the Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission since mid-June 2006. This paper presents an analysis of how the uncertainties in the inputs to the extinction retrieval algorithm propagate as the retrieval proceeds downward to lower levels of the atmosphere. The mathematical analyses, which are being used to calculate the uncertainties reported in the current (version 3) data release, are supported by figures illustrating the retrieval uncertainties in both simulated and actual data. Equations are also derived that describe the sensitivity of the extinction retrieval algorithm to errors in profile calibration and in the lidar ratios used in the retrievals. Biases that could potentially result from low signal-to-noise ratios in the data are also examined. Using simulated data, the propagation of bias errors resulting from errors in profile calibration and lidar ratios is illustrated.
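As a rough illustration of the downward error growth analyzed above, the following sketch propagates fractional uncertainties through a highly simplified single-channel extinction retrieval (one fixed lidar ratio, no multiple scattering). It is a toy model under stated assumptions, not the CALIPSO version 3 algorithm; every function name, coefficient, and error term here is illustrative.

    # Toy downward retrieval: sigma = S * B_att / T2, with the two-way
    # transmittance T2 accumulated from the levels above. Fractional
    # uncertainty grows with accumulated optical depth.
    import numpy as np

    def retrieve_with_uncertainty(b_att, rel_noise, S, rel_S, rel_cal, dz):
        """b_att: attenuated backscatter profile, ordered top-down.
        rel_noise: per-level fractional random error in b_att.
        S, rel_S: assumed lidar ratio and its fractional uncertainty.
        rel_cal: fractional calibration uncertainty applied at the top.
        Returns the extinction profile and its fractional uncertainty."""
        n = b_att.size
        sigma = np.zeros(n)
        rel_sigma = np.zeros(n)
        tau = 0.0      # optical depth accumulated above the current level
        var_tau = 0.0  # variance of that accumulated optical depth
        for i in range(n):
            T2 = np.exp(-2.0 * tau)
            sigma[i] = S * b_att[i] / T2
            # noise, lidar-ratio, and calibration terms, plus the term
            # from correcting for overlying attenuation
            rel_sigma[i] = np.sqrt(rel_noise**2 + rel_S**2 + rel_cal**2
                                   + 4.0 * var_tau)
            dtau = sigma[i] * dz
            tau += dtau
            var_tau += (rel_sigma[i] * dtau) ** 2  # grows downward
        return sigma, rel_sigma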

Full access
Eyal Amitai, David A. Marks, David B. Wolff, David S. Silberstein, Brad L. Fisher, and Jason L. Pippitt

Abstract

Evaluation of the Tropical Rainfall Measuring Mission (TRMM) satellite observations is conducted through a comprehensive ground validation (GV) program. Since the launch of TRMM in late 1997, standardized instantaneous and monthly rainfall products have been routinely generated using quality-controlled ground-based radar data adjusted to the gauge accumulations from four primary sites. As part of the NASA TRMM GV program, an effort is being made to evaluate these GV products. This paper describes the product evaluation effort for the Melbourne, Florida, site. This effort allows us to evaluate the radar rainfall estimates, to improve the algorithms in order to develop better GV products for comparison with the satellite products, and to recognize the major limiting factors in the evaluation, which reflect current limitations in radar rainfall estimation. Lessons learned and suggested improvements from this 8-yr mission are summarized in the context of improving planning for future precipitation missions, for example, the Global Precipitation Measurement (GPM) mission.

Full access
Ali Tokay, Leo Pio D’Adderio, David A. Marks, Jason L. Pippitt, David B. Wolff, and Walter A. Petersen

Abstract

The ground-based-radar-derived raindrop size distribution (DSD) parameters (the mass-weighted drop diameter D_mass and the normalized intercept parameter N_W) are the sole resource for direct validation of the National Aeronautics and Space Administration (NASA) Global Precipitation Measurement (GPM) mission Core Observatory satellite-based retrieved DSD. Both D_mass and N_W are obtained from radar-measured reflectivity Z_H and differential reflectivity Z_DR through empirical relationships. This study uses existing relationships that were determined for the GPM ground validation (GV) program and directly compares the NASA S-band polarimetric radar (NPOL) observables Z_H and Z_DR and the derived D_mass and N_W with those calculated by a two-dimensional video disdrometer (2DVD). The joint NPOL and 2DVD datasets were acquired during three GPM GV field campaigns conducted in eastern Iowa, southern Appalachia, and western Washington State. The comparative study quantifies the level of agreement for Z_H, Z_DR, D_mass, and log(N_W) at an optimum distance (15–40 km) from the radar as well as at distances greater than 60 km from the radar and over mountainous terrain. Interestingly, roughly 10%–15% of the NPOL Z_H–Z_DR pairs were well outside the envelope of 2DVD-estimated Z_H–Z_DR pairs. The exclusion of these pairs improved the comparisons noticeably.
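The empirical relationships themselves are not reproduced in the abstract, so the sketch below shows only their generic shape: a low-order fit of D_mass to Z_DR and the normalized-gamma scaling between Z_H, D_mass, and N_W. All coefficients are placeholders, not the published GPM GV fits.

    import numpy as np

    def dmass_from_zdr(zdr_db, coeffs=(0.5, 1.0, -0.1)):
        """Mass-weighted mean diameter (mm) from differential reflectivity
        (dB) via a hypothetical quadratic fit; coefficients are placeholders."""
        a, b, c = coeffs
        return a + b * zdr_db + c * zdr_db**2

    def log10_nw(zh_dbz, dmass_mm, b=7.3):
        """log10 of the normalized intercept parameter from reflectivity and
        D_mass, using the generic normalized-gamma scaling Z ~ N_W * D**b
        (b near 7 for Rayleigh scattering); illustrative only."""
        zh_linear = 10.0 ** (zh_dbz / 10.0)  # mm^6 m^-3
        return np.log10(zh_linear) - b * np.log10(dmass_mm)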

Free access
Stephanie M. Wingo, Walter A. Petersen, Patrick N. Gatlin, Charanjit S. Pabla, David A. Marks, and David B. Wolff

Abstract

Researchers now have the benefit of an unprecedented suite of space- and ground-based sensors that provide multidimensional and multiparameter precipitation information. Motivated by NASA’s Global Precipitation Measurement (GPM) mission and ground validation objectives, the System for Integrating Multiplatform Data to Build the Atmospheric Column (SIMBA) has been developed as a unique multisensor precipitation data fusion tool to unify field observations recorded in a variety of formats and coordinate systems into a common reference frame. Through platform-specific modules, SIMBA processes data from native coordinates and resolutions only to the extent required to set them into a user-defined three-dimensional grid. At present, the system supports several ground-based scanning research radars, NWS NEXRAD radars, profiling Micro Rain Radars (MRRs), multiple disdrometers and rain gauges, soundings, the GPM Microwave Imager and Dual-Frequency Precipitation Radar on board the Core Observatory satellite, and Multi-Radar Multi-Sensor system quantitative precipitation estimates. SIMBA generates a new atmospheric column data product, in the versatile netCDF format, that contains a concomitant set of all available data from the supported platforms within the user-specified grid defining the column area. Key parameters for each data source are preserved as attributes. SIMBA provides a streamlined framework for initial research tasks, facilitating more efficient precipitation science. We demonstrate the utility of SIMBA for investigations such as assessing spatial precipitation variability at subpixel scales and appraising satellite sensor algorithm representation of vertical precipitation structure for GPM Core Observatory overpass cases collected at the NASA Wallops Precipitation Science Research Facility and during the GPM Olympic Mountain Experiment (OLYMPEX) ground validation field campaign in Washington State.
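As a minimal sketch of the regridding step described above (not SIMBA's actual module code), the snippet below bin-averages one platform's scattered samples onto a user-defined three-dimensional column grid; writing the result, with per-source attributes, to netCDF would follow.

    import numpy as np

    def grid_to_column(x, y, z, values, edges_x, edges_y, edges_z):
        """Average scattered samples (x, y, z, value) into a 3-D grid.
        edges_*: 1-D arrays of bin edges defining the column grid."""
        bins = (edges_x, edges_y, edges_z)
        sums, _ = np.histogramdd((x, y, z), bins=bins, weights=values)
        counts, _ = np.histogramdd((x, y, z), bins=bins)
        with np.errstate(invalid="ignore"):
            # empty bins become NaN rather than zero
            return np.where(counts > 0, sums / counts, np.nan)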

Full access
David A. Robinson, Mark C. Serreze, Roger G. Barry, Greg Scharfen, and George Kukla

Abstract

Visible-band satellite imagery is used to manually map surface brightness changes over sea ice throughout the Arctic Basin from May to mid-August over a 10-yr period. These brightness changes are primarily due to snowmelt atop the ice cover. Using image processing techniques, parameterized albedos are estimated for each brightness class. Snowmelt begins in May in the marginal seas, progressing northward with time and finally commencing near the pole in late June. Large year-to-year differences are found in the timing of melt, exceeding one month in some regions. Parameterized albedos for most regions of the pack ice exceed 0.70 during May, decline rapidly during June, and reach a seasonal low of between 0.40 and 0.50 by late July. For August, regional albedos, which also include areas of open water beyond the southern pack ice limit, are up to 0.16 lower than the corresponding values for pack ice areas only.

Full access
David Chapman, Mark A. Cane, Naomi Henderson, Dong Eun Lee, and Chen Chen

Abstract

The authors investigate a sea surface temperature anomaly (SSTA)-only vector autoregressive (VAR) model for prediction of El Niño–Southern Oscillation (ENSO). VAR generalizes the linear inverse method (LIM) framework to incorporate an extended state vector including many months of recent prior SSTA in addition to the present state. An SSTA-only VAR model implicitly captures subsurface forcing observable in the LIM residual as red noise. Optimal skill is achieved using a state vector of order 14–17 months in an exhaustive 120-yr cross-validated hindcast assessment. It is found that VAR outperforms LIM, increasing forecast skill by 3 months, in a 30-yr retrospective forecast experiment.
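A minimal numpy sketch of the model class the abstract describes (not the authors' code): fit a vector autoregression of order p to an SSTA state time series by ordinary least squares, then forecast recursively; p = 1 recovers a LIM-like single-lag propagator.

    import numpy as np

    def fit_var(X, p):
        """X: (n_time, n_state) SSTA anomaly series. Returns A of shape
        (n_state, n_state * p) with x_t ~ A @ [x_{t-1}; ...; x_{t-p}]."""
        n = X.shape[0]
        Z = np.hstack([X[p - k - 1:n - k - 1] for k in range(p)])  # lag stack
        Y = X[p:]
        A, *_ = np.linalg.lstsq(Z, Y, rcond=None)
        return A.T

    def forecast(A, history, steps):
        """history: the p most recent states, oldest first, newest last."""
        p = A.shape[1] // A.shape[0]
        buf = list(history[-p:])
        out = []
        for _ in range(steps):
            z = np.concatenate(buf[::-1])  # newest first, matching fit_var
            x_next = A @ z
            out.append(x_next)
            buf.append(x_next)
            buf.pop(0)
        return np.array(out)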

Full access
Witold F. Krajewski, Mark L. Morrissey, James A. Smith, and David T. Rexroth

Abstract

A Monte Carlo simulation study is conducted to investigate the performance of the area-threshold method of estimating mean areal rainfall. The study uses a stochastic space-time model of rainfall as the true rainfall-field generator. Simple schemes for simulating radar observations of the simulated rainfall fields are employed. The schemes address both random and systematic components of the radar rainfall-estimation process. The results of the area-threshold method are compared to results based on conventional averaging of radar-estimated point rainfall observations. The results demonstrate that when the exponent parameter in the Z–R relationship has small uncertainty (about ±10%), the conventional method works better than the area-threshold method. When the errors are higher (±20%), the area-threshold method with an optimum threshold in the 5–10 mm h⁻¹ range performs best. For even higher errors in the Z–R relationship, the area-threshold method with a low threshold provides the best performance.
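The core of the area-threshold method described above can be sketched in a few lines (all coefficients hypothetical): the areal mean rain rate is estimated as a calibrated multiple of the fractional area where the radar rain rate exceeds a threshold tau.

    import numpy as np

    def area_threshold_estimate(rain_field, tau, slope):
        """rain_field: 2-D radar rain-rate field (mm/h); tau: threshold
        (mm/h); slope: coefficient S(tau) calibrated from training data."""
        frac = np.mean(rain_field > tau)  # fractional area above threshold
        return slope * frac

    def calibrate_slope(fields, tau):
        """Least-squares slope relating fractional coverage to the true
        areal mean over training fields (a stand-in for reference data)."""
        fracs = np.array([np.mean(f > tau) for f in fields])
        means = np.array([np.mean(f) for f in fields])
        return float(fracs @ means / (fracs @ fracs))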

Full access
Jun A. Zhang, Robert F. Rogers, David S. Nolan, and Frank D. Marks Jr.

Abstract

In this study, data from 794 GPS dropsondes deployed by research aircraft in 13 hurricanes are analyzed to study the characteristic height scales of the hurricane boundary layer. The height scales are defined in a variety of ways: the height of the maximum total wind speed, the inflow layer depth, and the mixed layer depth. The height of the maximum wind speed and the inflow layer depth are referred to as the dynamical boundary layer heights, while the mixed layer depth is referred to as the thermodynamic boundary layer height. The data analyses show that there is a clear separation between the thermodynamic and dynamical boundary layer heights. Consistent with previous studies of the boundary layer structure in individual storms, the dynamical boundary layer height is found to decrease with decreasing radius to the storm center. The thermodynamic boundary layer height, which is much shallower than the dynamical boundary layer height, is also found to decrease with decreasing radius to the storm center. The results also suggest that the traditional critical Richardson number method for determining the boundary layer height may not accurately reproduce the height scale of the hurricane boundary layer. These different height scales reveal the complexity of the hurricane boundary layer structure, which should be captured in hurricane model simulations.
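For concreteness, a hedged sketch of the three height scales computed from a single dropsonde profile is given below. The 10% inflow criterion and the 0.5 K virtual-potential-temperature threshold are common conventions assumed here for illustration, not necessarily the paper's exact definitions.

    import numpy as np

    def bl_heights(z, wspd, v_radial, theta_v):
        """z: heights (m, ascending); wspd: total wind speed (m/s);
        v_radial: radial wind (m/s, negative = inflow);
        theta_v: virtual potential temperature (K)."""
        h_wmax = z[np.argmax(wspd)]                # height of max wind

        peak = v_radial.min()                      # strongest inflow
        above = z >= z[np.argmin(v_radial)]
        weak = above & (v_radial > 0.1 * peak)     # inflow weaker than 10%
        h_inflow = z[weak][0] if weak.any() else np.nan

        exceed = theta_v > theta_v[0] + 0.5        # 0.5 K threshold
        h_mixed = z[exceed][0] if exceed.any() else z[-1]
        return h_wmax, h_inflow, h_mixed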

Full access
David J. Lorenz, Jason A. Otkin, Mark Svoboda, Christopher R. Hain, Martha C. Anderson, and Yafang Zhong

Abstract

The U.S. Drought Monitor (USDM) classifies drought into five discrete dryness/drought categories based on expert synthesis of numerous data sources. In this study, an empirical methodology is presented for creating a nondiscrete USDM index that simultaneously 1) represents the dryness/wetness value on a continuum and 2) is most consistent with the time scales and processes of the actual USDM. A continuous USDM representation will facilitate USDM forecasting methods, which will benefit from knowledge of where, within a discrete drought class, the current drought state most probably lies. The continuous USDM is developed such that the actual discrete USDM can be reconstructed by discretizing the continuous USDM based on the 30th, 20th, 10th, 5th, and 2nd percentiles, corresponding with the USDM definitions of the D0–D4 drought classes. Anomalies in precipitation, soil moisture, and evapotranspiration over a range of time scales are used as predictors to estimate the continuous USDM. The methodology is fundamentally probabilistic, meaning that the probability density function (PDF) of the continuous USDM is estimated, so the degree of uncertainty in the fit is properly characterized. Goodness-of-fit metrics and direct comparisons between the actual and predicted USDM analyses during different seasons and years indicate that this objective drought classification method is well correlated with the current USDM analyses. In Part II, this continuous USDM index will be used to improve intraseasonal USDM intensification forecasts, because it is capable of distinguishing between USDM states that are far from or near to the next-higher drought category.
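The discretization step described above is simple enough to sketch directly: map a continuous percentile-based drought index back onto the discrete classes using the stated 30th/20th/10th/5th/2nd percentile breakpoints (labels per the USDM convention; the mapping code itself is illustrative).

    # breakpoints (percentiles, dry side) and the class they bound
    BREAKS = [(2, "D4"), (5, "D3"), (10, "D2"), (20, "D1"), (30, "D0")]

    def discretize_usdm(percentile):
        """percentile: dryness percentile in [0, 100]; lower = drier."""
        for p, label in BREAKS:
            if percentile <= p:
                return label
        return "None"  # wetter than the 30th percentile: no drought class

    # example: an index value at the 4th percentile falls in D3
    assert discretize_usdm(4.0) == "D3"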

Full access