Search Results

Showing 1–10 of 21 items for Author or Editor: Paul Schultz
Paul Schultz

Abstract

In anticipation of computers that will be able to run weather forecasting models on very fine grids fast enough for real-time purposes, an algorithm for representing water phase change and precipitation processes was developed. The design criteria guiding this development are sufficiency (i.e., providing the required services), computational efficiency, and compatibility with four-dimensional data assimilation systems. The implication of the last criterion is that the model's moisture variables must be consistent with observable or inferable cloud properties.

The algorithm is compared with a well-documented research microphysics algorithm in terms of computing efficiency and agreement with observations. The weather forecasting model runs much faster using the new package instead of the research algorithm. The agreement between the new algorithm and the research algorithm is much better than the agreement between the observations and the results from either algorithm, which suggests that errors in observations or other errors in the model runs (initialization, boundary conditions) are larger sources of error than the microphysics representation.

Full access
Paul Schultz

Abstract

No abstract available

Paul Schultz

Abstract

Seven familiar stability indices were computed from sounding data for each of 83 days of a convection forecasting experiment conducted during the summer of 1985 in northeast Colorado. Observations of convectively driven weather events were collected; the values of the indices were compared against this dataset to examine their performance as predictors of severe weather (large hail, tornadoes, high wind) and significant weather (nonsevere but important from an economic or public safety standpoint). The results of the analysis are:

  1. Benchmark values of the indices that give their typical magnitudes on active days versus quiescent days. These values, compared with those computed in other regions, illustrate the potential fallacy of interpreting the indices in the absence of analogous region-specific reference statistics.

  2. Rankings that determine which indices worked best in this experiment. The highest ranked indices were the SWEAT index for severe weather and buoyancy for significant weather. Interestingly, SWEAT was the worst of those tested for significant weather.

  3. Quantitative convection forecasting guidance. The observed relative frequencies of severe and significant convection as functions of the seven indices are presented in graphical form. When used in a forecasting context, these observed relative frequencies can be interpreted as probabilities of severe and/or significant weather. Some of the graphs are clearly bimodal; no explanation for this behavior is offered.

Some of the benefits that would be realized by collecting more data, in this and other regions, are suggested. For example, there is a good possibility that some indices show particular skill for certain types of events (e.g., hail vs. high wind), but the present dataset is too small to establish any such connections clearly.
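The relative-frequency guidance described in item 3 can be sketched in a few lines. The binning scheme, index values, and outcomes below are invented for illustration and are not from the 1985 dataset:

```python
from collections import defaultdict

def relative_frequency_by_bin(index_values, event_occurred, bin_width):
    """Bin a stability index and compute the observed relative frequency
    of the event in each bin. In a forecasting context these frequencies
    can be read as empirical probabilities of the event given the index."""
    counts = defaultdict(lambda: [0, 0])  # bin -> [event days, total days]
    for x, hit in zip(index_values, event_occurred):
        b = int(x // bin_width)
        counts[b][1] += 1
        if hit:
            counts[b][0] += 1
    # Key each bin by its left edge; value is the observed relative frequency.
    return {b * bin_width: events / total
            for b, (events, total) in counts.items()}

# Hypothetical index values and severe-weather outcomes for seven days:
freqs = relative_frequency_by_bin(
    [10, 12, 25, 27, 29, 41, 44],
    [False, False, True, False, True, True, True],
    bin_width=10)
# freqs[10] is the fraction of days with index in [10, 20) that were severe
```

A real application would use far more days per bin, as the abstract's call for more data implies; with only a few days per bin the "probabilities" are noisy, which may also explain the bimodal graphs noted above.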

Paul Schultz and Thomas T. Warner

Abstract

A cross-sectional numerical primitive-equation model is used to simulate the summertime airflow pattern in the Los Angeles basin under calm synoptic-scale wind conditions. The contributions of the sea breeze, the urban heat island effect, and the mountain-valley wind are quantified. The mountain-valley and sea-breeze circulations are of the same sense (landward at the surface, seaward aloft) and strength (maximum of 5–10 m s−1 at the surface), whereas the urban heat island effect is negligible. Correct specification of the land surface characteristics is found to be important to the quality of the simulation.

Model output is then used to calculate estimates of the space and time variation of boundary-layer ventilation. Ventilation, defined as the product of the height of the planetary boundary layer and the mean wind speed therein, is found to be enhanced in the vicinity of the sea breeze front, and generally increases with distance from the ocean. In the stable marine air layer behind the front, the ventilation is especially low.
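The ventilation definition above (the product of boundary layer depth and mean wind speed) is a simple product; a minimal sketch with made-up values, not from the study:

```python
def ventilation(pbl_height_m, mean_wind_ms):
    """Boundary-layer ventilation: PBL depth times the mean wind speed
    within it. Units: m * (m/s) = m^2/s. Higher values indicate better
    flushing of pollutants from a column of the boundary layer."""
    return pbl_height_m * mean_wind_ms

# Illustrative values only: a shallow, light-wind marine layer behind the
# sea breeze front versus a deeper, windier boundary layer farther inland.
marine = ventilation(300.0, 2.0)     # shallow and calm: low ventilation
inland = ventilation(1500.0, 5.0)    # deep and windy: high ventilation
```

This matches the abstract's finding qualitatively: the stable marine layer gives low ventilation, while deeper mixed layers away from the ocean give much higher values.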

Paul Schultz and Marcia K. Politovich

Abstract

An automated procedure is developed for detecting and forecasting atmospheric conditions conducive to aircraft icing over the continental United States. The procedure uses gridded output from the Nested Grid Model and is based on the manual techniques currently in use at the National Aviation Weather Advisory Unit in Kansas City, Missouri.

Verification of the procedure suggests forecasting performance on par with that of human forecasters. Unfortunately, efforts at more rigorous performance analysis are hindered by the inadequacies of the verification database, which consists of pilots' subjective reports of airframe ice buildup. In general, no-ice conditions are not reported.

The physics of aircraft icing are reviewed, and the current manual techniques are discussed. The automated procedure provides an infrastructure for implementing incremental improvements in the algorithm as observations and numerical models improve.

David M. Schultz and Paul J. Roebber

Abstract

Over 50 yr have passed since the publication of Sanders' 1955 study, the first quantitative study of the structure and dynamics of a surface cold front. The purpose of this chapter is to reexamine some of the results of that study in light of modern methods of numerical weather prediction and diagnosis. A simulation with a resolution as high as 6-km horizontal grid spacing was performed with the fifth-generation Pennsylvania State University-National Center for Atmospheric Research (PSU-NCAR) Mesoscale Model (MM5), given initial and lateral boundary conditions from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) reanalysis project data from 17 to 18 April 1953. The MM5 produced a reasonable simulation of the front, although its strength was not as intense and its movement was not as fast as was analyzed by Sanders. The vertical structure of the front differed from that analyzed by Sanders in several significant ways. First, the strongest horizontal temperature gradient associated with the cold front in the simulation occurred above a surface-based inversion, not at the earth's surface. Second, the ascent plume at the leading edge of the front was deeper and more intense than that analyzed by Sanders. The reason was an elevated mixed layer that had moved over the surface cold front in the simulation, allowing a much deeper vertical circulation than was analyzed by Sanders. This structure is similar to that of Australian cold fronts with their deep, well-mixed, prefrontal surface layer. These two differences between the model simulation and the analysis by Sanders may be because upper-air data from Fort Worth, Texas, were unavailable to Sanders. Third, the elevated mixed layer also meant that isentropes along the leading edge of the front extended vertically.
Fourth, the field of frontogenesis of the horizontal temperature gradient calculated from the three-dimensional wind differed in that the magnitude of the maximum of the deformation term was larger than the magnitude of the maximum of the tilting term in the simulation, in contrast to Sanders' analysis and other previously published cases. These two discrepancies may be attributable to the limited horizontal resolution of the data that Sanders used in constructing his cross section. Last, a deficiency of the model simulation was that the postfrontal surface superadiabatic layer in the model did not match the observed well-mixed boundary layer. This result raises the question of the origin of the well-mixed postfrontal boundary layer behind cold fronts. To address this question, an additional model simulation without surface fluxes was performed, producing a well-mixed, not superadiabatic, layer. This result suggests that surface fluxes were not necessary for the development of the well-mixed layer, in agreement with previous research. Analysis of this event also amplifies two research themes that Sanders returned to later in his career. First, a prefrontal wind shift occurred in both the observations and the model simulation at stations in western Oklahoma. This prefrontal wind shift was caused by a lee cyclone departing the leeward slopes of the Rockies slightly equatorward of the cold front, rather than along the front as was the case farther eastward. Sanders' later research showed how the occurrence of these prefrontal wind shifts leads to the weakening of fronts. Second, this study shows the advantage of using surface potential temperature, rather than surface temperature, for determining the locations of surface fronts on sloping terrain.

Paul J. Roebber, David M. Schultz, and Romualdo Romero

Abstract

Despite the relatively successful long-lead-time forecasts of the storms during the 3 May 1999 tornadic outbreak in Oklahoma and Kansas, forecasters were unable to predict with confidence details concerning convective initiation and convective mode. The forecasters identified three synoptic processes they were monitoring for clues as to how the event would unfold. These elements were (a) the absence of strong surface convergence along a dryline in western Oklahoma and the Texas Panhandle, (b) the presence of a cirrus shield that was hypothesized to limit surface heating, and (c) the arrival into Oklahoma of an upper-level wind speed maximum [associated with the so-called southern potential vorticity (PV) anomaly] that was responsible for favorable synoptic-scale ascent and the cirrus shield. The Pennsylvania State University–National Center for Atmospheric Research Fifth-Generation Mesoscale Model (MM5), nested down to 2-km horizontal grid spacing, is used in forecast mode [using the data from the National Centers for Environmental Prediction Aviation (AVN) run of the Global Spectral Model to provide initial and lateral boundary conditions] to explore the sensitivity of the outbreak to these features. A 30-h control simulation is compared with the available observations and captures important qualitative characteristics of the event, including convective initiation east of the dryline and organization of mesoscale convective systems into long-lived, long-track supercells. Additional simulations in which the initial strength of the southern PV anomaly is altered suggest that synoptic regulation of the 3 May 1999 event was imposed by the effects of the southern PV anomaly. 
The model results indicate that 1) convective initiation in the weakly forced environment was achieved through modification of the existing cap by both surface heating and synoptic-scale ascent associated with the southern PV anomaly; 2) supercellular organization was supported regardless of the strength of the southern PV anomaly, although weak-to-moderate forcing from this feature was most conducive to the production of long-lived supercells and strong forcing resulted in a trend toward linear mesoscale convective systems; and 3) the cirrus shield was important in limiting development of convection and reducing competition between storms. The implications of these results for the use of high-resolution models in operational forecasting environments are discussed. The model provides potentially useful information to forecasters following the scientific forecast process, most particularly by assisting in the revision of conceptual ideas about the evolution of the outbreak. Substantial obstacles to operational implementation of such tools remain, however, including lack of model context (e.g., information concerning model biases), insufficient real-time observations to assess model prediction details effectively from the synoptic scale to the mesoscale, inconsistent forecaster education, and inadequate technology to support rapid scientific discovery in an operational setting.

David M. Schultz, Christopher C. Weiss, and Paul M. Hoffman

Abstract

To investigate the role of synoptic-scale processes in regulating the strength of the dryline, a dataset is constructed of all drylines occurring within the West Texas Mesonet (WTM) during April, May, and June of 2004 and 2005. In addition, dewpoint and wind data were collected from stations on the western (Morton; MORT) and eastern (Paducah; PADU) periphery of the WTM domain (230 km across), generally oriented east–west across the typical location of the dryline in west Texas. Drylines were characterized by two variables: the difference in dewpoint between MORT and PADU (hereafter, dryline intensity) and the difference in the eastward component of the wind between MORT and PADU (hereafter, dryline confluence). A high degree of correlation existed between the two variables, consistent with a strong role for dryline confluence in determining dryline intensity. Some cases departing from the strong correlation between these variables represent synoptically quiescent drylines whose strength is likely dominated by boundary layer mixing processes.
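The two dryline variables defined above reduce to simple station differences. A minimal sketch, assuming dewpoints in °C and eastward (u) wind components in m s−1; the sign conventions are chosen here for illustration, since the abstract does not specify them:

```python
def dryline_intensity(td_mort_c, td_padu_c):
    """Dewpoint difference between the eastern (PADU) and western (MORT)
    stations. With this (assumed) sign convention, moist air at Paducah
    and dry air at Morton -- a sharper dryline -- gives a larger value."""
    return td_padu_c - td_mort_c

def dryline_confluence(u_mort_ms, u_padu_ms):
    """Difference in the eastward wind component between the stations.
    With this (assumed) convention, eastward flow at MORT exceeding that
    at PADU -- air converging on the dryline -- gives a positive value."""
    return u_mort_ms - u_padu_ms

# Illustrative example: dry westerly flow at Morton, moist easterly-
# component flow at Paducah -- strong intensity and strong confluence,
# the correlated pair the study describes.
intensity = dryline_intensity(td_mort_c=2.0, td_padu_c=18.0)
confluence = dryline_confluence(u_mort_ms=6.0, u_padu_ms=-3.0)
```

The study's correlation result amounts to these two quantities rising and falling together across dryline days, apart from the synoptically quiescent cases noted above.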

Composite synoptic analyses were constructed of the upper and lower quartiles of dryline intensity, termed STRONG and WEAK, respectively. STRONG drylines were associated with a short-wave trough in the upper-level westerlies approaching west Texas, an accompanying surface cyclone over eastern New Mexico, and southerly flow over the south-central United States. This synoptic environment was favorable for enhancing the dryline confluence responsible for strengthening the dryline. In contrast, WEAK drylines were associated with an upper-level long-wave ridge over Texas and New Mexico, broad surface cyclogenesis over the southwestern United States, and a weak lee trough—the dryline confluence favorable for dryline intensification was much weaker. A third composite termed NO BOUNDARY was composed of dates with no surface airstream boundary (e.g., front, dryline) in the WTM domain. The NO BOUNDARY composite featured an upper-level long-wave ridge west of Texas and no surface cyclone or lee trough. The results of this study demonstrate the important role that synoptic-scale processes (e.g., surface lee troughs, upper-level short-wave troughs) play in regulating the strength of the dryline. Once such a favorable synoptic pattern occurs, mesoscale and boundary layer processes can lead to further intensification of the dryline.

Isidora Jankov, Jian-Wen Bao, Paul J. Neiman, Paul J. Schultz, Huiling Yuan, and Allen B. White

Abstract

Numerical prediction of precipitation associated with five cool-season atmospheric river events in northern California was analyzed and compared to observations. The model simulations were performed by using the Advanced Research Weather Research and Forecasting Model (ARW-WRF) with four different microphysical parameterizations. This was done as a part of the 2005–06 field phase of the Hydrometeorological Test Bed project, for which special profilers, soundings, and surface observations were implemented. Using these unique datasets, the meteorology of atmospheric river events was described in terms of dynamical processes and the microphysical structure of the cloud systems that produced most of the surface precipitation. Events were categorized as “bright band” (BB) or “nonbright band” (NBB), the differences being the presence of significant amounts of ice aloft (or lack thereof) and a signature of higher reflectivity collocated with the melting layer produced by frozen precipitating particles descending through the 0°C isotherm.

The model was reasonably successful at predicting the timing of surface fronts, the development and evolution of low-level jets associated with latent heating processes and terrain interaction, and wind flow signatures consistent with deep-layer thermal advection. However, the model showed a tendency to overestimate the duration and intensity of the impinging low-level winds. In general, all model configurations overestimated precipitation, especially in the case of BB events. Nonetheless, large differences in precipitation distribution and cloud structure were noted among model runs using the various microphysical parameterization schemes.

Huiling Yuan, John A. McGinley, Paul J. Schultz, Christopher J. Anderson, and Chungu Lu

Abstract

High-resolution (3 km) time-lagged (initialized every 3 h) multimodel ensembles were produced in support of the Hydrometeorological Testbed (HMT)-West-2006 campaign in northern California, covering the American River basin (ARB). Multiple mesoscale models were used, including the Weather Research and Forecasting (WRF) model, the Regional Atmospheric Modeling System (RAMS), and the fifth-generation Pennsylvania State University–National Center for Atmospheric Research Mesoscale Model (MM5). Short-range (6 h) quantitative precipitation forecasts (QPFs) and probabilistic QPFs (PQPFs) were compared to the 4-km NCEP stage IV precipitation analyses for archived intensive operation periods (IOPs). The two sets of ensemble runs (operational and rerun forecasts) were examined to evaluate the quality of high-resolution QPFs produced by time-lagged multimodel ensembles and to investigate the impacts of ensemble configurations on forecast skill. Uncertainties in precipitation forecasts were associated with different models, model physics, and initial and boundary conditions. The diabatic initialization by the Local Analysis and Prediction System (LAPS) improved the precipitation forecasts, while the selection of microphysics was critical in ensemble design.

Probability biases in the ensemble products were addressed by calibrating the PQPFs. Using artificial neural network (ANN) and linear regression (LR) methods, bias correction of the PQPFs and a cross-validation procedure were applied to three operational IOPs and four rerun IOPs. Both the ANN and LR methods effectively improved the PQPFs, especially for lower thresholds. The LR method outperformed the ANN method in bias correction, in particular for smaller training data sizes. More training data (e.g., one-season forecasts) are desirable to test the robustness of both calibration methods.
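As a rough illustration of LR-style probability calibration, a one-predictor least-squares fit can be written in closed form. The single-predictor form and any data fed to it are assumptions for the sketch, not the study's actual configuration:

```python
def fit_linear_calibration(raw_probs, observed):
    """Least-squares fit of observed outcomes (0/1) against raw ensemble
    probabilities, giving calibrated = a + b * raw. A minimal stand-in
    for the LR calibration; the HMT study may have used more predictors
    and per-threshold fits."""
    n = len(raw_probs)
    mx = sum(raw_probs) / n
    my = sum(observed) / n
    sxx = sum((x - mx) ** 2 for x in raw_probs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw_probs, observed))
    b = sxy / sxx          # slope: <1 when the raw PQPF is overconfident
    a = my - b * mx        # intercept: absorbs overall probability bias
    return a, b

def calibrate(p, a, b):
    # Clip so the calibrated output remains a valid probability in [0, 1].
    return min(1.0, max(0.0, a + b * p))
```

Cross-validation, as in the abstract, would fit `a` and `b` on all IOPs but one and evaluate on the held-out IOP, rotating through the cases.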
