Search Results

You are looking at 1–10 of 19 items for

  • Author or Editor: John Henderson
André Robert, John Henderson, and Colin Turnbull

Abstract

A semi-implicit time integration algorithm developed earlier for a barotropic model resulted in an appreciable economy of computing time. An extension of this method to baroclinic models is formulated, including a description of the various steps in the calculations. In the proposed scheme, the temperature is separated into a basic part dependent only on the vertical coordinate and a corresponding perturbation part. All terms involving the perturbation temperature are calculated from current values of the variables, while a centered finite-difference time average is applied to the horizontal pressure gradient, the divergence, and the vertical motion in the remaining terms. This method gives computationally stable integrations with relatively large time steps.
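
As a rough illustration of why such a scheme tolerates large time steps, the sketch below applies the same idea, an implicit centered time average on a fast linear term with explicit treatment of the rest, to a toy oscillation equation. The frequencies and forcing are invented for illustration and are not taken from the model.

```python
import numpy as np

# Toy analogue: du/dt = i*omega_fast*u + s(u). The fast linear term stands in
# for the gravity-wave (pressure gradient, divergence, vertical motion) terms
# treated with a centered implicit time average; s(u) stands in for the slow
# terms handled explicitly. All parameter values are invented.
omega_fast = 50.0                 # fast frequency (1/s); omega_fast*dt >> 1 below

def slow(u):
    """Slow forcing, evaluated explicitly at the central time level."""
    return 0.1j * u

dt, nsteps = 0.1, 200             # a step far too long for a fully explicit scheme
u_prev = 1.0 + 0.0j               # u at level n-1
u_curr = u_prev                   # crude startup for the leapfrog

for _ in range(nsteps):
    # u^{n+1} = u^{n-1} + 2*dt*[ i*omega*(u^{n+1} + u^{n-1})/2 + s(u^n) ]
    rhs = u_prev * (1 + 1j * omega_fast * dt) + 2 * dt * slow(u_curr)
    u_prev, u_curr = u_curr, rhs / (1 - 1j * omega_fast * dt)

print(abs(u_curr))                # stays O(1): the fast mode is handled stably
```

Because the implicit average of the fast term has unit amplification, stability no longer constrains the time step to the gravity-wave scale, which is the source of the reported economy.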

The model used to test the semi-implicit scheme does not include topography, precipitation, diabatic heating, or other important physical processes. Five-day hemispheric integrations from real data with time steps of 60 and 30 min show differences of the order of 3 m. These errors are insignificant compared with other sources of error normally present in most numerical models. The model already produces relatively good short-range predictions, a strong argument for inserting the major physical processes as soon as possible.

Full access
John M. Henderson, Gary M. Lackmann, and John R. Gyakum

Abstract

Hurricane Opal’s landfall in October 1995 exemplifies a serious hurricane forecast problem: the potential for hurricane conditions over land with insufficient warning time. Official National Hurricane Center (NHC, a division of the Tropical Prediction Center) forecasts predicted landfall and passage inland over the eastern United States later than observed because the National Centers for Environmental Prediction’s (NCEP) operational models and other hurricane track models underestimated the northward component of the steering flow. The goal of this paper is to isolate the cause of the poor forecast of meridional storm motion in NCEP’s early Eta Model by using quasigeostrophic potential vorticity (QGPV) inversion, which permits decomposition of the steering flow into contributions from different synoptic-scale features.

The inversion procedure is applied to the Eta analysis and 48-h Eta forecast valid at 1200 UTC 5 October 1995. Analyses from the European Centre for Medium-Range Weather Forecasts form an independent comparison for the Eta Model forecasts and analyses. An extratropical cyclone to the northwest of Opal and a synoptic-scale ridge to the east are identified as major contributors to the steering flow. The Eta Model underpredicted the intensity of the ridge positioned immediately downstream of the storm, resulting in a corresponding underprediction of the meridional steering flow by 5 m s⁻¹.
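
As a hedged illustration of the inversion idea (a barotropic simplification, not the paper's full three-dimensional QGPV inversion), the sketch below inverts a single PV anomaly on a periodic grid and diagnoses the nondivergent wind attributable to it; the grid, amplitude, and placement are all assumed.

```python
import numpy as np

# Simplified, barotropic analogue of piecewise PV inversion: given a PV
# anomaly q on a doubly periodic grid, solve del^2 psi = q spectrally, then
# diagnose the winds that this one anomaly contributes to the steering flow.
n, L = 128, 4.0e6                       # grid points and domain size (m), assumed
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)                # X varies along axis 1, Y along axis 0

# Hypothetical anticyclonic "ridge" anomaly east of the storm (negative q, NH)
q = -5e-5 * np.exp(-(((X - 0.7 * L) ** 2 + (Y - 0.5 * L) ** 2) / (3e5) ** 2))

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k)
K2 = KX ** 2 + KY ** 2
K2[0, 0] = 1.0                          # avoid division by zero for the mean mode

psi_hat = np.fft.fft2(q) / (-K2)        # invert del^2 psi = q
psi_hat[0, 0] = 0.0
psi = np.real(np.fft.ifft2(psi_hat))

u = -np.gradient(psi, x, axis=0)        # u = -dpsi/dy
v = np.gradient(psi, x, axis=1)         # v =  dpsi/dx
print(f"peak meridional wind from this anomaly: {v.max():.1f} m/s")
```

Summing such piecewise contributions over each identified feature is what lets the steering flow be attributed to the cyclone, the ridge, and the remaining field separately.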

It is hypothesized that the Eta Model underforecasted the magnitude and extent of Opal’s outflow, and subsequent interaction with the downstream ridge, largely due to the model’s inability to correctly represent the convection associated with the hurricane in both the analyses and forecasts. Underforecasted upper-tropospheric temperatures downstream of Opal are consistent with this hypothesis. Accurate initialization of the model in the region containing Opal may have been hampered by the dearth of upper-air data over the Gulf of Mexico. Failure to properly resolve the hurricane is hypothesized to have resulted in the underforecasting of the downstream ridge and its associated steering flow.

Full access
Craig Miller, John Holmes, David Henderson, John Ginger, and Murray Morrison

Abstract

The Dines pressure tube anemometer was the primary wind speed recording instrument used in Australia until it was replaced by Synchrotac cup anemometers in the 1990s. Simultaneous observations of gust wind speeds recorded by both types of anemometer during tropical cyclones have, however, raised questions about the equivalency of the recorded gusts. An experimental study of both versions of the Dines anemometer used in Australia shows that the instrument's response is dominated by the motion of the float manometer used to record the wind speed. The amplitude response function exhibits two resonant peaks, whose amplitude and frequency depend on the instrument version and the mean wind speed. A comparison of gust wind speeds from Dines and Synchrotac anemometers based on random-process and linear-system theory shows that, on average, the low-speed Dines anemometer records values 2%–5% higher than a Synchrotac anemometer under the same conditions, while the high-speed Dines anemometer records values 3%–7% higher, depending on the mean wind speed and turbulence intensity. These differences grow when the WMO-recommended 3-s moving-average gust is used to report the Synchrotac gust wind speeds, rising to 6%–12% and 11%–19% for the low- and high-speed Dines anemometers, respectively. These results are consistent with both field observations and an independent extreme value analysis of simultaneously observed gust wind speeds at seven sites in northern Australia.
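
A minimal sketch of the random-process/linear-system argument follows, with invented response parameters rather than the paper's calibrated ones: the gust statistics an instrument reports depend on the wind spectrum weighted by the square of its amplitude response, so a resonant instrument and a low-pass-filtered one report different gusts for the same wind.

```python
import numpy as np

f = np.linspace(0.01, 10.0, 5000)           # frequency axis, Hz
df = f[1] - f[0]
U, Iu, Lu = 30.0, 0.2, 85.0                 # mean wind (m/s), turbulence intensity, length scale (m); assumed
sigma_u2 = (Iu * U) ** 2
nf = f * Lu / U
S = sigma_u2 * (4 * Lu / U) / (1 + 70.8 * nf ** 2) ** (5 / 6)   # von Karman u-spectrum

def H2(f, fn, zeta):
    """Squared gain of a second-order system: natural frequency fn, damping zeta."""
    r = f / fn
    return 1.0 / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)

# Dines-like response: two lightly damped resonances (hypothetical parameters)
H2_dines = 0.7 * H2(f, 0.3, 0.25) + 0.3 * H2(f, 1.2, 0.15)
# Cup anemometer reported with a 3-s moving average: sinc^2 low-pass
H2_cup = np.sinc(f * 3.0) ** 2

sigma_dines = np.sqrt(np.sum(S * H2_dines) * df)   # filtered gust-scale std dev
sigma_cup = np.sqrt(np.sum(S * H2_cup) * df)
g = 3.0                                     # peak factor, assumed
print(f"gust ratio, Dines vs 3-s cup: {(U + g * sigma_dines) / (U + g * sigma_cup):.2f}")
```

The resonant peaks amplify variance near the instrument's natural frequencies, while the 3-s moving average removes it, which is why the reported differences widen once the WMO averaging is applied to the Synchrotac data.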

Full access
Bryan K. Woods, Thomas Nehrkorn, and John M. Henderson

Abstract

A 31-yr time series of boundary layer winds has been developed for a region on the outer continental shelf. This simulated time series was designed to be suitable to study the wind resources for a potential offshore wind farm. Reanalysis data were used to initialize a series of high-resolution numerical simulations. The limited number of high-resolution numerical simulations was repeatedly sampled using an analog matching criterion that is based on the reanalysis data to create a 31-yr time series with 500-m spatial resolution and 10-min temporal resolution. Validation against buoy data indicates that combining the reanalysis and resampled high-resolution numerical simulations produces a much more accurate wind speed distribution than does the reanalysis alone. Both the model physics and downscaled resolution may be contributing to the observed performance gains.
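
The analog-resampling step might look roughly like the following sketch, in which the matching metric, predictor set, and sample sizes are all assumptions for illustration, not the authors' criterion.

```python
import numpy as np

# Sketch: each hour of the 31-yr coarse reanalysis record is matched to its
# nearest neighbor among the hours that were explicitly downscaled, and that
# hour's 500-m fields are reused for the long time series.
rng = np.random.default_rng(0)
n_hours = 31 * 365 * 24
reanalysis = rng.normal(size=(n_hours, 4))            # coarse predictors for every hour
simulated = rng.choice(n_hours, 2000, replace=False)  # hours actually run at high resolution
library = reanalysis[simulated]                       # predictors of the simulated subset

def best_analog(state, library):
    """Index (within the simulated subset) of the closest analog by Euclidean distance."""
    return int(np.argmin(np.sum((library - state) ** 2, axis=1)))

# Build the long record by reusing high-resolution output from each hour's best analog
analog_of = np.array([best_analog(s, library) for s in reanalysis[:1000]])  # first 1000 h shown
print(analog_of[:10])                                 # indices into the 2000 downscaled hours
```

The payoff of this design is that only a limited set of expensive simulations is needed, yet every hour of the 31 years inherits high-resolution structure from a dynamically similar case.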

Full access
Richard D. Rosen, John M. Henderson, and David A. Salstein

Abstract

As part of its mandate to oversee the design of measurement networks for future weather and climate observing needs, the North American Atmospheric Observing System (NAOS) program hypothesized that replacing some of the existing radiosonde stations in the continental United States (CONUS) with another observing system would have little impact on weather forecast accuracy. The consequences of this hypothesis for climate monitoring over North America (NA) are considered here by comparing estimates of multidecadal trends in seasonal mean 500-mb temperature (T) integrated regionally over CONUS or NA, made with and without the 14 upper-air stations initially targeted for replacement. The trend estimates are obtained by subsampling gridded reanalysis fields at points nearest the 78 (126) existing CONUS (NA) radiosonde stations and at these points less the 14 stations. Trends in T for CONUS and NA during each season are also estimated based on the full reanalysis grid, but regardless of the sampling strategy, differences in trends are small and statistically insignificant. A more extreme reduction of the existing radiosonde network is also considered here, namely, one associated with the Global Climate Observing System (GCOS), which includes only 6 (14) stations in CONUS (NA). Again, however, trends for CONUS or NA based on the GCOS sampling strategy are not significantly different from those based on the current network, despite the large difference in station coverage. Estimates of continental-scale trends in 500-mb temperature therefore appear to be robust, whether based on the existing North American radiosonde network or on a range of potential changes thereto. This result depends on the large spatial scale of the underlying tropospheric temperature trend field; other quantities of interest for climate monitoring may be considerably more sensitive to the number and distribution of upper-air stations.
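
As a rough sketch of the subsampling test (synthetic data: the trend and noise level are invented, and only the 78- and 14-station counts come from the text), the code below shows why a large-scale trend survives the loss of stations: the shared signal dominates the regional mean, and station-level noise averages out.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1958, 1999)
n_stations = 78
true_trend = 0.02                                   # K/yr, assumed for illustration
# Station anomaly series: a shared large-scale trend plus independent station noise
T = (true_trend * (years - years[0])[None, :]
     + rng.normal(0, 0.5, size=(n_stations, len(years))))

def regional_trend(T_subset):
    """Linear trend (K/yr) of the regional-mean series."""
    mean_series = T_subset.mean(axis=0)
    return np.polyfit(years, mean_series, 1)[0]

keep = np.ones(n_stations, bool)
keep[rng.choice(n_stations, 14, replace=False)] = False  # drop 14 stations

print(f"all 78 stations: {regional_trend(T):+.4f} K/yr")
print(f"minus 14:        {regional_trend(T[keep]):+.4f} K/yr")
```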

Full access
David S. Henderson, Jason A. Otkin, and John R. Mecikalski

Abstract

The evolution of model-based cloud-top brightness temperatures (BT) associated with convective initiation (CI) is assessed for three bulk cloud microphysics schemes in the Weather Research and Forecasting Model. Using a composite-based analysis, cloud objects derived from high-resolution (500 m) model simulations are compared to 5-min GOES-16 imagery for a case study day located near the Alabama–Mississippi border. Observed and simulated cloud characteristics for clouds reaching CI are examined by utilizing infrared BTs commonly used in satellite-based CI nowcasting methods. The results demonstrate the ability of object-based verification methods with satellite observations to evaluate the evolution of model cloud characteristics, and the BT comparison provides insight into a known issue of model simulations producing too many convective cells reaching CI. The timing of CI from the different microphysical schemes is dependent on the production of ice in the upper levels of the cloud, which typically occurs near the time of maximum cloud growth. In particular, large differences in precipitation formation drive differences in the amount of cloud water able to reach upper layers of the cloud, which impacts cloud-top glaciation. Larger cloud mixing ratios are found in clouds with sustained growth, leading to more cloud water lofted to the upper levels of the cloud and the formation of ice. Clouds unable to sustain growth lack the cloud water needed to form ice and grow into cumulonimbus. Clouds with slower growth rates display BT trends similar to those of growing clouds, which suggests that forecasting CI using geostationary satellites may require information beyond that derived at cloud top.
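
A minimal sketch of the object-based step, on a synthetic BT field with an assumed threshold (the study's object criteria are more involved), is shown below: contiguous cold cloud tops are labeled and per-object statistics such as minimum BT, a common growth proxy, are extracted.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)
bt = 290.0 + rng.normal(0.0, 2.0, size=(200, 200))        # clear-sky BT (K), synthetic
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx, depth in [(60, 80, 50.0), (140, 120, 25.0)]:  # two synthetic cloud tops
    bt -= depth * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 15.0 ** 2)

cloud_mask = bt < 273.0                   # assumed CI-relevant BT threshold
labels, n_obj = ndimage.label(cloud_mask) # connected-component cloud objects
for i in range(1, n_obj + 1):
    obj = bt[labels == i]
    print(f"object {i}: area = {obj.size} px, min BT = {obj.min():.1f} K")
```

Repeating this labeling across model output and GOES-16 scans, and compositing the per-object BT histories, is the pattern that allows simulated and observed CI behavior to be compared cloud by cloud.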

Full access
Thomas Nehrkorn, John Henderson, Mark Leidner, Marikate Mountain, Janusz Eluszkiewicz, Kathryn McKain, and Steven Wofsy

Abstract

A recent National Research Council report highlighted the potential utility of atmospheric observations and models for detecting trends in concentrated emissions from localized regions, such as urban areas. The Salt Lake City (SLC), Utah, area was chosen for a pilot study to determine the feasibility of using ground-based sensors to identify trends in anthropogenic urban emissions over a range of time scales (from days to years). The Weather Research and Forecasting model (WRF) was combined with a Lagrangian particle dispersion model and an emission inventory to model carbon dioxide (CO2) concentrations that can be compared with in situ measurements. An accurate representation of atmospheric transport requires a faithful modeling of the meteorological conditions. This study examines in detail the ability of different configurations of WRF to reproduce the observed local and mesoscale circulations, and the diurnal evolution of the planetary boundary layer (PBL) in the SLC area. Observations from the Vertical Transport and Mixing field experiment in 2000 were used to examine the sensitivity of WRF results to changes in the PBL parameterization and to the inclusion of an urban canopy model (UCM). Results show that for urban locations there is a clear benefit from parameterizing the urban canopy for simulation of the PBL and near-surface conditions, particularly for temperature evolution at night. Simulation of near-surface CO2 concentrations for a 2-week period in October 2006 showed that running WRF at high resolution (1.33 km) and with a UCM also improves the simulation of observed increases in CO2 during the early evening.
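
The receptor calculation can be sketched as a footprint-times-inventory sum; the fields, units, and values below are synthetic stand-ins, not the study's WRF-driven footprints or the SLC inventory.

```python
import numpy as np

# Sketch: the modeled CO2 enhancement at a ground-based sensor is the
# Lagrangian footprint (sensitivity of the receptor to surface fluxes)
# multiplied gridcell-by-gridcell with the emission inventory and summed.
rng = np.random.default_rng(3)
ny, nx = 50, 50
footprint = rng.exponential(0.001, size=(ny, nx))  # ppm per (umol m-2 s-1), synthetic
emissions = np.zeros((ny, nx))
emissions[20:30, 20:30] = 15.0                     # urban-core flux, umol m-2 s-1 (assumed)

enhancement = np.sum(footprint * emissions)        # ppm above background
background = 385.0                                 # ppm, assumed for the period
print(f"modeled CO2 at receptor: {background + enhancement:.1f} ppm")
```

Because the footprint is produced by the transport model, any error in WRF's boundary layer or local circulations maps directly into the simulated concentrations, which is why the PBL and urban-canopy sensitivity tests matter.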

Full access
Ronald D. Leeper, John Kochendorfer, Timothy A. Henderson, and Michael A. Palecki

Abstract

A field experiment was performed in Oak Ridge, Tennessee, with four instrumented towers placed over grass at increasing distances (4, 30, 50, 124, and 300 m) from a built-up area. Stations were aligned so as to simulate the impact of small-scale encroachment on temperature observations. As expected, temperature observations were warmest for the site closest to the built environment, with an average temperature difference of 0.31° and 0.24°C for aspirated and unaspirated sensors, respectively. Mean aspirated temperature differences were greater during the evening (0.47°C) than during the day (0.16°C). This was particularly true for evenings following greater daytime solar insolation (20+ MJ day⁻¹) with surface winds from the direction of the built environment, where mean differences exceeded 0.80°C. The impact of the built environment on air temperature diminished with distance, with a warm bias detectable only out to tower B′, located 50 m away. The experimental findings were comparable to a known case of urban encroachment at a U.S. Climate Reference Network station in Kingston, Rhode Island. The experimental and operational results both led to reductions in the diurnal temperature range of ~0.39°C for fan-aspirated sensors. Interestingly, the unaspirated sensor had a larger reduction in diurnal temperature range (DTR) of 0.48°C. These results suggest that small-scale urban encroachment within 50 m of a station can have important impacts on daily temperature extrema (maximum and minimum), with the magnitude of these differences dependent upon prevailing environmental conditions and sensing technology.
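
The conditional binning behind these numbers can be sketched as follows; the data and effect sizes are synthetic, chosen only to mirror the reported magnitudes, and the column names are invented.

```python
import numpy as np
import pandas as pd

# Sketch: bin the near-minus-far temperature difference by daytime insolation
# and by whether the surface wind blows from the built-up area.
rng = np.random.default_rng(4)
n = 365
insolation = rng.uniform(5, 28, n)            # MJ/day
from_built = rng.random(n) < 0.4              # wind from the built environment
# Assumed effect: a ~0.16 C baseline bias, enhanced after sunny days with
# wind from the built-up area
dT = rng.normal(0.16, 0.1, n) + 0.7 * ((insolation >= 20) & from_built)

df = pd.DataFrame({"dT": dT, "insolation": insolation, "from_built": from_built})
sunny = df["insolation"] >= 20
print(df.loc[sunny & df["from_built"], "dT"].mean())  # enhanced bias, ~0.86 C here
print(df.loc[~sunny, "dT"].mean())                    # baseline bias, ~0.16 C
```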

Full access
S. Mark Leidner, Thomas Nehrkorn, John Henderson, Marikate Mountain, Tom Yunck, and Ross N. Hoffman

Abstract

Global Navigation Satellite System (GNSS) radio occultations (RO) over the last 10 years have proved to be a valuable and essentially unbiased data source for operational global numerical weather prediction. However, the existing sampling coverage is too sparse in both space and time to support forecasting of severe mesoscale weather. In this study, the case study or quick observing system simulation experiment (QuickOSSE) framework is used to quantify the impact of vastly increased numbers of GNSS RO profiles on mesoscale weather analysis and forecasting. The current study focuses on a severe convective weather event that produced both a tornado and flash flooding in Oklahoma on 31 May 2013. The WRF Model is used to compute a realistic depiction of the atmosphere; this 2-km “nature run” (NR) serves as the “truth” in this study. The NR is sampled by two proposed constellations of GNSS RO receivers that would produce 250,000 and 2.5 million profiles per day globally. These data are then assimilated using WRF and a 24-member, 18-km-resolution, physics-based ensemble Kalman filter. The data assimilation is cycled hourly and makes use of a nonlocal, excess phase observation operator for RO data. The assimilation of greatly increased numbers of RO profiles produces improved analyses, particularly of the lower-tropospheric moisture fields. The forecast results suggest positive impacts on convective initiation. Additional experiments should be conducted for different weather scenarios and with improved OSSE systems.
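
A one-dimensional, hedged sketch of an excess-phase style forward operator follows; the study's operator is nonlocal along the full bent ray path, whereas the toy below integrates refractivity along a straight segment with an invented profile.

```python
import numpy as np

def refractivity(p_hpa, T_k, e_hpa):
    """Smith-Weintraub refractivity in N-units: N = 77.6 p/T + 3.73e5 e/T^2."""
    return 77.6 * p_hpa / T_k + 3.73e5 * e_hpa / T_k ** 2

s = np.linspace(0.0, 200e3, 401)        # path coordinate (m), toy straight ray
ds = s[1] - s[0]
p = 1000.0 * np.exp(-s / 8.0e5)         # pressure (hPa), toy variation along the path
T = 280.0 - 5e-5 * s                    # temperature (K)
e = 10.0 * np.exp(-s / 1.0e5)           # water vapor pressure (hPa)

N = refractivity(p, T, e)               # N-units (refractive index excess x 1e6)
excess_phase = np.sum(N * 1e-6) * ds    # meters of excess optical path
print(f"excess phase along path: {excess_phase:.2f} m")
```

Because the moist term scales with e/T², the operator is strongly sensitive to lower-tropospheric water vapor, consistent with the moisture analyses benefiting most from the added profiles.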

Full access
Keith D. Hutchison, Steve Marusa, John R. Henderson, Robert C. Kenley, Phillip C. Topping, William G. Uplinger, and John A. Twomey

Abstract

The National Polar-Orbiting Operational Environmental Satellite System (NPOESS) requires improved accuracy in the retrieval of sea surface skin temperature (SSTS) from its Visible Infrared Imager Radiometer Suite (VIIRS) sensor over the capability to retrieve bulk sea surface temperature (SSTB) that has been demonstrated with currently operational National Oceanic and Atmospheric Administration (NOAA) satellites carrying the Advanced Very High Resolution Radiometer (AVHRR) sensor. Statistics show an existing capability to retrieve SSTB with a 1σ accuracy of about 0.8 K in the daytime and 0.6 K with nighttime data. During the NPOESS era, a minimum 1σ SSTS measurement uncertainty of 0.5 K is required during daytime and nighttime conditions, while 0.1 K is desired. Simulations have been performed, using PACEOS™ scene generation software and the multichannel sea surface temperature (MCSST) algorithms developed by NOAA, to better understand the implications of this more stringent requirement on algorithm retrieval methodologies and system design concepts. The results suggest that minimum NPOESS SSTS accuracy requirements may be satisfied with sensor NEΔT values of approximately 0.12 K, which are similar to the AVHRR sensor design specifications. However, error analyses of retrieved SSTB from AVHRR imagery suggest that these more stringent NPOESS requirements may be difficult to meet with existing MCSST algorithms. Thus, a more robust algorithm, a new retrieval methodology, or more stringent system characteristics may be needed to satisfy SSTS measurement uncertainty requirements during the NPOESS era. It is concluded that system-level simulations must accurately model all relevant phenomenology, and that any new algorithm development should be referenced against in situ observations of ocean surface skin temperatures.
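
A split-window MCSST retrieval has the generic form sketched below; the coefficients here are placeholders for illustration, not NOAA's operational values. The channel difference corrects for water-vapor absorption, and the secant term accounts for the longer atmospheric path at oblique viewing angles.

```python
import numpy as np

def mcsst(t11, t12, sat_zenith_deg, a=1.0, b=2.5, c=0.8, d=-276.0):
    """Generic split-window MCSST form (placeholder coefficients, SST in deg C):
    SST = a*T11 + b*(T11 - T12) + c*(T11 - T12)*(sec(theta) - 1) + d."""
    sec = 1.0 / np.cos(np.radians(sat_zenith_deg))
    return a * t11 + b * (t11 - t12) + c * (t11 - t12) * (sec - 1.0) + d

# Hypothetical clear-sky pixel: 11- and 12-micron brightness temperatures (K)
t11, t12 = 292.4, 291.1
print(f"retrieved SST: {mcsst(t11, t12, 30.0):.2f} degC")
```

Because the retrieval amplifies the channel difference by the coefficient b, channel noise propagates into the SST with a gain well above unity, which is why the 0.12-K NEΔT finding and the difficulty of reaching 0.5 K with existing MCSST algorithms are linked.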

Full access