Search Results

Showing 1–10 of 89 items for Author or Editor: David D. Turner

Jeffrey D. Duda and David D. Turner

Abstract

The object-based verification procedure described in a recent paper by Duda and Turner was expanded herein to compare forecasts of composite reflectivity and 6-h precipitation objects between the two most recent operational versions of the High-Resolution Rapid Refresh (HRRR) model, versions 3 and 4, over an expanded set of warm-season cases in 2019 and 2020. In addition to analyzing all objects, a reduced set of forecast–observation object pairs was constructed by taking the best forecast match to a given observation object, in order to reduce bias and allow unambiguous object comparison. Despite an apparent improvement in scalar metrics such as the object-based threat score in HRRRv4 relative to HRRRv3, no statistically significant differences were found between the models. Nonetheless, many object attribute comparisons revealed indications of improved forecast performance in HRRRv4 compared to HRRRv3. For example, HRRRv4 had a reduced overforecasting bias for medium- and large-sized reflectivity objects and for all objects during the afternoon. HRRRv4 also better replicated the distribution of object complexity and aspect ratio. Results for 6-h precipitation also suggested superior performance of HRRRv4 over HRRRv3. However, HRRRv4 had larger centroid displacement errors and more severely overforecast objects with a high maximum precipitation amount. Overall, this exercise revealed multiple forecast deficiencies in the HRRR, enabling developers to focus their efforts on specific, targeted improvements to model forecasts.
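A minimal sketch of the best-match pairing step described above (Python; the interest threshold, array sizes, and function names are assumptions for illustration, not taken from the paper), keeping only the single best forecast match for each observed object:

```python
import numpy as np

# Given a MODE-style "interest" matrix with one row per observed object and
# one column per forecast object, keep only the best forecast match for each
# observed object, as in the reduced set of object pairs described above.
def best_matches(interest, min_interest=0.70):
    """Return (obs_index, fcst_index, interest) for the best forecast match
    to each observed object, if that match exceeds an assumed threshold."""
    pairs = []
    for i_obs, row in enumerate(interest):
        j_fcst = int(np.argmax(row))
        if row[j_fcst] >= min_interest:
            pairs.append((i_obs, j_fcst, float(row[j_fcst])))
    return pairs

# Example with a small random interest matrix (values in [0, 1]).
rng = np.random.default_rng(0)
demo = rng.random((4, 6))
print(best_matches(demo))
```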

Significance Statement

This work builds upon the authors’ prior work in assessing model forecast quality using an alternative verification method—object-based verification. In this paper we verified two versions of the same model (one an upgrade from the other) that were making forecasts covering the same time window, using the object-based verification method. We found that the updated model was not statistically significantly better, although there were indications it performed better in certain aspects such as capturing the change in the number of storms during the daytime. We were able to identify specific problem areas in the models, which helps us direct model developers in their efforts to further improve the model.

Restricted access
Jeffrey D. Duda and David D. Turner

Abstract

The Method of Object-based Diagnostic Evaluation (MODE) is used to perform an object-based verification of approximately 1400 forecasts of composite reflectivity from the operational HRRR during April–September 2019. In this study, MODE is configured to prioritize deep, moist convective storm cells typical of those that produce severe weather across the central and eastern United States during the warm season. In particular, attributes related to distance and size are given the greatest attribute weights for computing interest in MODE. HRRR tends to overforecast all objects, but substantially overforecasts both small objects at low-reflectivity thresholds and large objects at high-reflectivity thresholds. HRRR tends either to underforecast objects in the southern and central plains or to have a correct frequency bias there, whereas it overforecasts objects across the southern and eastern United States. Attribute comparisons reveal the inability of the HRRR to fully resolve convective-scale features, as well as the impact of data assimilation and the loss of skill during the initial hours of the forecasts. Scalar metrics are defined and computed based on MODE output, chiefly relying on the interest value. The object-based threat score (OTS), in particular, characterizes HRRR forecast performance similarly to the Heidke skill score, but with differing magnitudes, suggesting value in adopting an object-based approach to forecast verification. The typical distance between object centroids is also analyzed and shows gradual degradation with increasing forecast length.
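As a sketch of how a weighted total interest and an object-based threat score can be computed from object attributes (the attribute names, weights, and the simplified OTS form below are illustrative assumptions, not the study's exact configuration or MODE's implementation):

```python
import numpy as np

def total_interest(attr_interest, weights):
    """Weighted combination of per-attribute interest values in [0, 1]."""
    keys = list(attr_interest)
    w = np.array([weights[k] for k in keys])
    f = np.array([attr_interest[k] for k in keys])
    return float(np.sum(w * f) / np.sum(w))

# Hypothetical attribute interests for one forecast-observation object pair,
# with distance- and size-related attributes weighted most heavily.
pair_interest = {"centroid_distance": 0.9, "boundary_distance": 0.8, "area_ratio": 0.7}
attr_weights  = {"centroid_distance": 4.0, "boundary_distance": 3.0, "area_ratio": 2.0}
print("total interest:", total_interest(pair_interest, attr_weights))

def object_based_threat_score(matched_pairs, total_fcst_area, total_obs_area):
    """Simplified OTS: interest-weighted area of matched object pairs divided
    by the total area of all forecast and observed objects."""
    weighted_area = sum(i * (a_f + a_o) for i, a_f, a_o in matched_pairs)
    return weighted_area / (total_fcst_area + total_obs_area)

# One matched pair: interest 0.85, forecast area 120 grid cells, observed 100.
print("OTS:", object_based_threat_score([(0.85, 120.0, 100.0)], 300.0, 250.0))
```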

Full access
Sergey Y. Matrosov and David D. Turner

Abstract

A remote sensing method to retrieve the mean temperature of cloud liquid using ground-based microwave radiometer measurements is evaluated and tested by comparisons with direct cloud temperature information inferred from ceilometer cloud-base measurements and temperature profiles from radiosonde soundings. The method is based on the dependence of the ratio of cloud optical thicknesses at W-band (~90 GHz) and Ka-band (~30 GHz) frequencies on cloud liquid temperature. This ratio is obtained from total optical thicknesses inferred from radiometer measurements of brightness temperatures after accounting for the contributions from oxygen and water vapor. This accounting is based on the radiometer-derived integrated water vapor amount and on temperature and pressure measurements at the surface. The W–Ka-band ratio method is applied to measurements from a three-channel (90, 31.4, and 23.8 GHz) microwave radiometer at the U.S. Department of Energy Atmospheric Radiation Measurement Mobile Facility at Oliktok Point, Alaska. The analyzed events span conditions from warm stratus clouds with temperatures above freezing to mixed-phase clouds with supercooled liquid water layers. Intercomparisons of the radiometer-based cloud liquid temperature retrievals with estimates from collocated ceilometer and radiosonde measurements indicated a standard deviation of about 3.5°C, on average, between the two retrieval types across a wide range of cloud temperatures, from warm liquid clouds to mixed-phase clouds with supercooled liquid and liquid water paths greater than 50 g m⁻². The three-channel microwave radiometer–based method has broad applicability, since it requires neither the use of active sensors to locate the boundaries of liquid cloud layers nor information on the vertical profile of temperature.
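Written out, the retrieval chain described above takes roughly the following form (a hedged outline: the mean-radiating-temperature opacity expression is a standard approximation and the function f stands in for the modeled temperature dependence of the liquid-absorption ratio; neither is quoted from the paper):

```latex
\begin{align*}
  \tau(\nu) &= \ln\frac{T_{\mathrm{mr}}(\nu) - T_{\mathrm{cos}}}{T_{\mathrm{mr}}(\nu) - T_B(\nu)},
  &&\text{opacity from measured brightness temperature } T_B,\\
  \tau_{\mathrm{liq}}(\nu) &= \tau(\nu) - \tau_{\mathrm{O_2}}(\nu) - \tau_{\mathrm{H_2O}}(\nu),
  &&\text{remove oxygen and water vapor contributions},\\
  R &= \frac{\tau_{\mathrm{liq}}(90\ \mathrm{GHz})}{\tau_{\mathrm{liq}}(31.4\ \mathrm{GHz})},
  \qquad T_{\mathrm{liq}} \approx f^{-1}(R),
  &&\text{invert the modeled ratio--temperature relation.}
\end{align*}
```

Here T_mr is a mean radiating temperature, T_cos the cosmic background, and the gas opacities are estimated from the retrieved water vapor amount and surface temperature and pressure, as stated in the abstract.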

Full access
P. Jonathan Gero and David D. Turner

Abstract

A trend analysis was applied to a 14-yr time series of downwelling spectral infrared radiance observations from the Atmospheric Emitted Radiance Interferometer (AERI) located at the Atmospheric Radiation Measurement Program (ARM) site in the U.S. Southern Great Plains. The highly accurate calibration of the AERI instrument, performed every 10 min, ensures that any statistically significant trend in the observed data over this time can be attributed to changes in the atmospheric properties and composition, and not to changes in the sensitivity or responsivity of the instrument. The measured infrared spectra, numbering more than 800 000, were classified as clear-sky, thin cloud, and thick cloud scenes using a neural network method. The AERI data record demonstrates that the downwelling infrared radiance is decreasing over this 14-yr period in the winter, summer, and autumn seasons but is increasing in the spring; these trends are statistically significant and are primarily due to long-term change in the cloudiness above the site. The AERI data also show many statistically significant trends on annual, seasonal, and diurnal time scales, with different trend signatures identified in the separate scene classifications. Given the decadal time span of the dataset, effects from natural variability should be considered in drawing broader conclusions. Nevertheless, this dataset has high value owing to the ability to infer possible mechanisms for any trends from the observations themselves and to test the performance of climate models.
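A minimal sketch of separating a long-term trend from a seasonal cycle in such a radiance time series (not the authors' procedure; the synthetic data are placeholders, and a real analysis must also account for autocorrelation when judging significance):

```python
import numpy as np

# Synthetic stand-in for a 14-yr daily radiance series: linear trend plus an
# annual harmonic plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 14.0, 14 * 365)                       # time in years
y = 100.0 - 0.05 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 1.0, t.size)

# Ordinary least squares: intercept, trend, and annual sine/cosine terms.
X = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
trend = coef[1]                                            # units per year

# Crude standard error of the trend; assumes independent residuals, which
# autocorrelated radiance data violate (the real analysis must handle this).
resid = y - X @ coef
sigma2 = resid @ resid / (t.size - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
print(f"trend = {trend:.3f} +/- {np.sqrt(cov[1, 1]):.3f} per year")
```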

Full access
Aronne Merrelli and David D. Turner

Abstract

The information content of high-spectral-resolution midinfrared (MIR; 650–2300 cm⁻¹) and far-infrared (FIR; 200–685 cm⁻¹) upwelling radiance spectra is calculated for clear-sky temperature and water vapor profiles. The wavenumber ranges of the two spectral bands overlap at the central absorption line in the CO₂ ν₂ absorption band, and each contains one side of the full absorption band. Each spectral band also includes a water vapor absorption band; the MIR contains the first vibrational–rotational absorption band, while the FIR contains the rotational absorption band. The upwelling spectral radiances are simulated with the Line-By-Line Radiative Transfer Model (LBLRTM), and the retrievals and information content analysis are computed using standard optimal estimation techniques. Perturbations in the surface temperature and in the trace gases methane, ozone, and nitrous oxide (CH₄, O₃, and N₂O) are introduced to represent forward-model errors. Each spectrum is observed by a simulated infrared spectrometer with a spectral resolution of 0.5 cm⁻¹ and realistic, spectrally varying sensor noise levels. The modeling and analysis framework is applied identically to each spectral range, allowing a quantitative comparison. The results show that, for similar sensor noise levels, the FIR has an advantage in water vapor profile information content and less sensitivity to forward-model errors. With a higher noise level in the FIR, which is a closer match to current FIR detector technology, the FIR information content drops and shows a disadvantage relative to the MIR.
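For reference, the information content measure underlying such an analysis, the degrees of freedom for signal from standard optimal estimation theory, can be computed as in this sketch (matrix sizes and values are placeholders, not those of the study):

```python
import numpy as np

# Placeholder sizes and values; in the study the Jacobian K would come from
# LBLRTM perturbation runs and the covariances from the prior and the
# spectrally varying sensor noise.
rng = np.random.default_rng(0)
n_channels, n_state = 400, 60
K = 0.01 * rng.normal(size=(n_channels, n_state))   # Jacobian d(radiance)/d(state)
S_e = np.diag(np.full(n_channels, 0.2**2))           # sensor-noise covariance
S_a = 2.0**2 * np.eye(n_state)                       # prior (a priori) covariance

# Averaging kernel A and degrees of freedom for signal (DFS = trace of A).
Se_inv = np.linalg.inv(S_e)
A = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(S_a)) @ (K.T @ Se_inv @ K)
print("degrees of freedom for signal:", np.trace(A))
```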

Full access
Eric Gilleland, Domingo Muñoz-Esparza, and David D. Turner

Abstract

When testing hypotheses about which of two competing models, say A and B, is better, the difference is often not significant. An alternative, complementary approach is to measure how often model A is better than model B, regardless of how small or large the difference is. The hypothesis concerns whether or not the percentage of time that model A is better than model B is larger than 50%. One generalized test statistic that can be used is the power-divergence test, which encompasses many familiar goodness-of-fit test statistics, such as the log-likelihood-ratio and Pearson X² tests. Theoretical results justify using the χ² distribution with k − 1 degrees of freedom for the entire family of test statistics, where k is the number of categories. However, these results assume that the underlying data are independent and identically distributed, which is often violated. Empirical results demonstrate that the reduction to two categories (i.e., model A is better than model B versus model B is better than model A) results in a test that is reasonably robust to even severe departures from temporal independence, as well as to contemporaneous correlation. The test is demonstrated on two example verification sets: 6-h forecasts of eddy dissipation rate (m²/³ s⁻¹) from two versions of the Graphical Turbulence Guidance model, and 12-h forecasts of 2-m temperature (°C) and 10-m wind speed (m s⁻¹) from two versions of the High-Resolution Rapid Refresh model. The novelty of this paper is in demonstrating the utility of the power-divergence statistic in the face of temporally dependent data, as well as the emphasis on testing for the “frequency-of-better” alongside more traditional measures.
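A usage sketch of the two-category "frequency-of-better" test described above, using scipy.stats.power_divergence as a convenient implementation of the same family of statistics (the counts are invented; note that the chi-squared p-value is non-directional, whereas the paper's hypothesis of "better more than 50% of the time" is one-sided):

```python
import numpy as np
from scipy.stats import power_divergence

# Suppose model A beat model B on 640 of 1200 cases; under the null
# hypothesis each model is better half the time.
counts   = np.array([640, 560])        # [A better, B better]
expected = np.array([600.0, 600.0])    # 50/50 split under the null

# lambda_ = 0 gives the log-likelihood-ratio (G) statistic; lambda_ = 1 gives
# Pearson's X^2. With k = 2 categories, both are referred to a chi-squared
# distribution with k - 1 = 1 degree of freedom.
for lam, name in [(0.0, "log-likelihood ratio"), (1.0, "Pearson X^2")]:
    stat, pval = power_divergence(counts, expected, lambda_=lam)
    print(f"{name}: statistic = {stat:.3f}, p-value = {pval:.4f}")
```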

Open access
Véronique Meunier, David D. Turner, and Pavlos Kollias

Abstract

Two-dimensional water vapor fields were retrieved with a tomographic approach using simulated measurements from multiple ground-based microwave radiometers. The goal of this paper was to investigate how the various aspects of the instrument setup (number and spacing of elevation angles and of instruments, number of frequencies, etc.) affected the quality of the retrieved field. This was done for two simulated atmospheric water vapor fields: 1) an exaggerated turbulent boundary layer and 2) a simplified water vapor front. An optimal estimation algorithm was used to obtain the tomographic field from the microwave radiometers and to evaluate the fidelity and information content of this retrieved field.

While the retrieval of the simplified front was reasonably successful, the retrieval could not reproduce the details of the turbulent boundary layer field even using up to nine instruments and 25 elevation angles. In addition, the vertical profile of the variability of the water vapor field could not be captured. An additional set of tests was performed using simulated data from a Raman lidar. Even with the detailed lidar measurements, the retrieval did not succeed except when the lidar data were used to define the a priori covariance matrix. This suggests that the main limitation to obtaining fine structures in a retrieved field using tomographic retrievals is the definition of the a priori covariance matrix.
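One common way to define the prior covariance matrix that the last paragraph identifies as the key limitation is an exponential correlation model (an assumed illustration, not necessarily the form used in the paper):

```latex
\[
  \mathbf{S}_a(i,j) = \sigma_i\,\sigma_j\,
  \exp\!\left(-\frac{\lVert \mathbf{r}_i - \mathbf{r}_j \rVert}{L}\right),
\]
```

where σ_i is the assumed water vapor variability at grid point r_i and L is a correlation length; the choice of σ and L controls how much fine-scale structure the retrieval is allowed to introduce, which is consistent with the finding above that defining the prior covariance from lidar data was what allowed the fine structure to be recovered.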

Full access
David D. Turner, P. Jonathan Gero, and David C. Tobin
Full access
Joseph Sedlar, Laura D. Riihimaki, Kathleen Lantz, and David D. Turner

Abstract

Various methods have been developed to characterize cloud type, otherwise referred to as cloud regime. These include manual sky observations, combining radiative and cloud vertical properties observed from satellite, surface-based remote sensing, and digital processing of sky imagers. While each method has inherent advantages and disadvantages, none of these cloud-typing methods actually includes measurements of surface shortwave or longwave radiative fluxes. Here, a method that relies upon detailed, surface-based radiation and cloud measurements and derived data products to train a random-forest machine-learning cloud classification model is introduced. Measurements from five years of data from the ARM Southern Great Plains site were compiled to train and independently evaluate the model classification performance. A cloud-type accuracy of approximately 80% using the random-forest classifier reveals that the model is well suited to predict climatological cloud properties. Furthermore, an analysis of the cloud-type misclassifications is performed. While physical cloud types may be misreported, the shortwave radiative signatures are similar between misclassified cloud types. From this, we assert that the cloud-regime model has the capacity to successfully differentiate clouds with comparable cloud–radiative interactions. Therefore, we conclude that the model can provide useful cloud-property information for fundamental cloud studies, inform renewable energy studies, and be a tool for numerical model evaluation and parameterization improvement, among many other applications.
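A minimal sketch of this kind of classifier in Python (the features, labels, and data below are random placeholders standing in for the surface radiation and cloud measurements actually used to train the model):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder predictors loosely analogous to surface-based radiation and
# cloud quantities; real training data would come from the ARM measurements.
rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.uniform(0, 1, n),       # e.g., shortwave transmissivity
    rng.uniform(0, 1, n),       # e.g., diffuse shortwave fraction
    rng.uniform(150, 450, n),   # e.g., downwelling longwave flux (W m^-2)
    rng.uniform(0, 12, n),      # e.g., cloud-base height (km)
])
y = rng.integers(0, 7, n)       # e.g., seven cloud-regime labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```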

Open access
Ryan Lagerquist, David D. Turner, Imme Ebert-Uphoff, and Jebb Q. Stewart

Abstract

Radiative transfer (RT) is a crucial but computationally expensive process in numerical weather/climate prediction. We develop neural networks (NNs) to emulate a common RT parameterization called the Rapid Radiative Transfer Model (RRTM), with the goal of creating a faster parameterization for the Global Forecast System (GFS) v16. In previous work we emulated a highly simplified version of the shortwave RRTM only, excluding many predictor variables, driven by Rapid Refresh forecasts interpolated to a consistent height grid, and using only 30 sites in the Northern Hemisphere. In this work we emulate the full shortwave and longwave RRTM, with all predictor variables, driven by GFSv16 forecasts on the native pressure–sigma grid, using data from around the globe. We experiment with NNs of widely varying complexity, including the U-net++ and U-net3+ architectures and deeply supervised training, designed to ensure realistic and accurate structure in gridded predictions. We evaluate the optimal shortwave NN and optimal longwave NN in great detail, as a function of geographic location, cloud regime, and other weather types. Both NNs produce extremely reliable heating rates and fluxes. The shortwave NN has an overall RMSE/MAE/bias of 0.14/0.08/−0.002 K day⁻¹ for heating rate and 6.3/4.3/−0.1 W m⁻² for net flux. Analogous numbers for the longwave NN are 0.22/0.12/−0.0006 K day⁻¹ and 1.07/0.76/+0.01 W m⁻². Both NNs perform well in nearly all situations, and the shortwave (longwave) NN is 7510 (90) times faster than the RRTM. Both will soon be tested online in the GFSv16.
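For orientation, the emulation setup can be sketched with a deliberately simplified dense network (not the U-net++/U-net3+ architectures of the paper; sizes, data, and training settings are placeholders): profile and surface predictors go in, heating-rate profiles plus a flux come out, trained against RRTM output with a mean-squared-error loss.

```python
import torch
from torch import nn

# Simplified stand-in emulator: predictors in, heating rates + net flux out.
n_in, n_levels = 300, 127            # illustrative sizes only

model = nn.Sequential(
    nn.Linear(n_in, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, n_levels + 1),    # per-level heating rates + net flux
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Placeholder batch standing in for GFSv16 profiles and RRTM targets.
x = torch.randn(64, n_in)
y = torch.randn(64, n_levels + 1)
for _ in range(5):                   # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print("final batch loss:", float(loss))
```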

Significance Statement

Radiative transfer is an important process for weather and climate. Accurate radiative transfer models exist, such as the RRTM, but these models are computationally slow. We develop neural networks (NNs), a type of machine learning model that is often computationally fast after training, to mimic the RRTM. We wish to accelerate the RRTM by orders of magnitude without sacrificing much accuracy. We drive both the NNs and RRTM with data from the GFSv16, an operational weather model, using locations around the globe during all seasons. We show that the NNs are highly accurate and much faster than the RRTM, which suggests that the NNs could be used to solve radiative transfer inside the GFSv16.

Restricted access