Search Results

Showing 1–7 of 7 items for Author or Editor: John V. Cortinas Jr.
Chris C. Robbins and John V. Cortinas Jr.

Abstract

Local and synoptic conditions associated with freezing-rain events in the continental United States, as well as the temporal and spatial variability of these conditions, have been documented for the period 1976–90. It has been postulated that the thermodynamic stratification observed during freezing rain is similar regardless of geographic location. Hypothesis testing, however, revealed interregional variability in the magnitude of several sounding parameters chosen to characterize the important aspects of the thermodynamic profile of freezing-rain environments. This variability appears to be related not only to local effects of terrain variations and nearby water sources but also to regional differences in the synoptic-scale atmospheric environments favorable for freezing rain.

These results suggest that freezing-rain forecast techniques that rely on critical parameters derived for specific geographic locations may not be valid elsewhere. Forecasters evaluating the possibility of freezing rain over synoptic-scale areas therefore should not expect a single sounding-derived variable to be an accurate indicator of freezing rain across an entire region. Algorithms that evaluate the entire thermodynamic profile, and its effect on frozen and freezing precipitation, may provide forecasters with a quicker and more accurate assessment of freezing-rain potential than traditional techniques such as partial thickness.
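The interregional comparison described above rests on standard two-sample hypothesis testing of sounding parameters between regions. As a minimal, purely illustrative sketch (not the authors' actual procedure; the region names and values are invented), a Welch's t statistic for one sounding parameter in two regions can be computed as:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic:
    (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b),
    using sample variances; large |t| suggests the regional
    means of the parameter differ."""
    na, nb = len(sample_a), len(sample_b)
    return (mean(sample_a) - mean(sample_b)) / math.sqrt(
        variance(sample_a) / na + variance(sample_b) / nb
    )

# Hypothetical warm-layer maximum temperatures (°C) for two regions
southern_plains = [1.0, 2.0, 3.0, 4.0, 5.0]
northeast = [11.0, 12.0, 13.0, 14.0, 15.0]
t = welch_t(southern_plains, northeast)
```

A full test would compare |t| against a critical value from the t distribution at the chosen significance level.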

Full access
John V. Cortinas Jr. and David J. Stensrud

Abstract

A severe weather outbreak that occurred on 21–23 November 1992 in the southern United States is used to illustrate how an understanding of model parameterization schemes can help in the evaluation and use of mesoscale model output. Results from a mesoscale model simulation show that, although the model accurately reproduced many of the observed mesoscale features, several aspects of the simulation are imperfect. Through an understanding of the parameterization schemes, these imperfections are analyzed and found to have little effect on the overall skill of the model forecast in this case.

Mesoscale model output also is used to provide guidance for evaluating the severe weather threat. Using the model output to produce hourly calculations of convective available potential energy (CAPE) and storm-relative environmental helicity (SREH), it is found that regions with positive CAPE, SREH greater than 150 m² s⁻², and model-produced convective rainfall correspond well with the areas in which supercell thunderstorms developed. In addition, these parameters are highly variable in both space and time, accentuating the need for continuous monitoring in an operational environment and for frequent model output times.
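The screening criteria stated above can be sketched as a simple per-gridpoint check. This is a hypothetical illustration of those thresholds (the function name and units are assumptions, not the authors' code):

```python
def supercell_region(cape_j_kg, sreh_m2_s2, conv_rain_mm):
    """Flag a grid point as favorable for supercells using the
    criteria quoted in the abstract: positive CAPE, SREH above
    150 m² s⁻², and nonzero model-produced convective rainfall."""
    return cape_j_kg > 0.0 and sreh_m2_s2 > 150.0 and conv_rain_mm > 0.0
```

Consistent with the abstract's emphasis on spatial and temporal variability, such a check would be reapplied at every hourly model output time.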

Full access
David J. Stensrud, John V. Cortinas Jr., and Harold E. Brooks

Abstract

The ability to discriminate between tornadic and nontornadic thunderstorms is investigated using a mesoscale model. Nine severe weather events are simulated: four tornadic supercell thunderstorm outbreaks that occurred in conjunction with strong large-scale forcing for upward motion, three bow-echo outbreaks that also occurred with strong large-scale forcing for upward motion, and two isolated tornadic supercell thunderstorms that occurred under much weaker large-scale forcing. Examination of the simulations suggests that tornadic and nontornadic thunderstorms can be discriminated by first using the locations of model-produced convective activity and values of convective available potential energy to highlight regions of likely thunderstorm development, and then using values of storm-relative environmental helicity (SREH) and bulk Richardson number shear (BRNSHR) to indicate whether tornadic supercell thunderstorms are likely. SREH greater than 100 m² s⁻² indicates that any storms that develop are likely to have a midlevel mesocyclone, BRNSHR between 40 and 100 m² s⁻² suggests that low-level mesocyclogenesis is likely, and BRNSHR less than 40 m² s⁻² suggests that the thunderstorms will be dominated by outflow. By combining the storm characteristics suggested by these parameters, mesoscale model output can be used to infer the dominant mode of severe convection.
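The decision sequence above amounts to a small rule set. A minimal sketch, assuming SREH and BRNSHR have already been computed from model output (the function name and return strings are invented, and the abstract gives no rule for BRNSHR above 100 m² s⁻²):

```python
def convective_mode(sreh_m2_s2, brnshr_m2_s2):
    """Apply the SREH/BRNSHR thresholds quoted in the abstract.

    Returns (midlevel mesocyclone likely?, low-level character)."""
    midlevel_meso = sreh_m2_s2 > 100.0
    if brnshr_m2_s2 < 40.0:
        low_level = "outflow dominated"
    elif brnshr_m2_s2 <= 100.0:
        low_level = "low-level mesocyclogenesis likely"
    else:
        low_level = "unspecified"  # no guidance given for BRNSHR > 100
    return midlevel_meso, low_level
```

As the abstract notes, these thresholds are applied only where model-produced convection and positive CAPE already indicate likely storm development.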

Full access
John V. Cortinas Jr., Ben C. Bernstein, Christopher C. Robbins, and J. Walter Strapp

Abstract

A comprehensive analysis of freezing rain, freezing drizzle, and ice pellets was conducted using surface observations from across the United States and Canada. This study complements other studies of freezing precipitation in the United States and Canada and provides additional information about the temporal characteristics of these events. In particular, it was found that 1) spatial variability in the annual frequency of freezing precipitation and ice pellets is large across the United States and Canada, with these precipitation types occurring most frequently across the central and eastern portions of both countries, much of Alaska, and the northern shores of Canada; 2) freezing precipitation and ice pellets occur most often from December to March, except in northern Canada and Alaska, where they also occur during the warm season; 3) freezing rain and freezing drizzle appear to be influenced by the diurnal solar cycle; 4) freezing precipitation is often short lived; 5) most freezing rain and freezing drizzle are not mixed with other precipitation types, whereas most reports of ice pellets include other precipitation types; 6) freezing precipitation and ice pellets occur most frequently at surface (2 m) temperatures slightly below 0°C; and 7) following most freezing-rain events, the surface temperature remains at or below freezing for up to 10 h, and for up to 25 h following freezing drizzle.

Full access
David M. Schultz, John V. Cortinas Jr., and Charles A. Doswell III

Abstract

Wetzel and Martin present an ingredients-based methodology for forecasting winter season precipitation. Although they are to be commended for offering a framework for winter-weather forecasting, disagreements arise with some of their specific recommendations. In particular, this paper clarifies the general philosophy of ingredients-based methodologies and shows how the methodology presented by Wetzel and Martin can be misinterpreted because of their choice of diagnostics (including their PVQ and the so-called traditional techniques) and their use of cloud microphysics. Given that winter-weather forecasts are imperfect at present, this paper advocates continued exploration of scientifically based forecasting techniques.

Full access
Matthew S. Wandishin, Michael E. Baldwin, Steven L. Mullen, and John V. Cortinas Jr.

Abstract

Short-range ensemble forecasting is extended to a critical winter weather problem: forecasting precipitation type. Forecast soundings from the operational NCEP Short-Range Ensemble Forecast system are combined with five precipitation-type algorithms to produce probabilistic forecasts from January through March 2002. Thus the ensemble combines model diversity, initial condition diversity, and postprocessing algorithm diversity. All verification numbers are conditioned on both the ensemble and observations recording some form of precipitation. This separates the forecast of type from the yes–no precipitation forecast.

The ensemble is very skillful in forecasting rain and snow, but it is only moderately skillful for freezing rain and unskillful for ice pellets. Even for the unskillful forecasts, however, the ensemble shows some ability to discriminate among the different precipitation types and thus provides some positive value to forecast users. Algorithm diversity is shown to be as important as initial-condition diversity in terms of forecast quality, although neither has as large an impact as model diversity. The algorithms have their individual strengths and weaknesses, but no algorithm is clearly better or worse than the others overall.
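In an ensemble that pools models, initial conditions, and postprocessing algorithms as described above, the probability of each precipitation type is simply the fraction of member–algorithm pairs forecasting that type. A minimal sketch (the member forecasts and type codes are invented for illustration):

```python
from collections import Counter

def ptype_probabilities(member_forecasts):
    """Probability of each precipitation type as the fraction of
    member-algorithm pairs forecasting it. Consistent with the
    conditioning described in the abstract, the caller supplies
    only members that forecast some form of precipitation."""
    counts = Counter(member_forecasts)
    total = len(member_forecasts)
    return {ptype: n / total for ptype, n in counts.items()}

# 10 hypothetical member-algorithm pairs (e.g., 2 models x 5 algorithms)
probs = ptype_probabilities(["SN"] * 6 + ["ZR"] * 2 + ["IP"] * 2)
```

Conditioning on precipitation being forecast is what separates the type forecast from the yes–no precipitation forecast, as the abstract notes.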

Full access
Paul J. Roebber, Sara L. Bruening, David M. Schultz, and John V. Cortinas Jr.

Abstract

Current prediction of snowfall amount is accomplished either by using empirical techniques or by applying a standard modification to the liquid-equivalent precipitation, such as the 10-to-1 rule. This rule, which supposes that the snowfall depth is 10 times the liquid equivalent (a snow ratio of 10:1, reflecting an assumed snow density of 100 kg m⁻³), is particularly popular with operational forecasters, although it dates from a limited nineteenth-century study. Unfortunately, measurements of freshly fallen snow indicate that the snow ratio can vary from on the order of 3:1 to (occasionally) 100:1. Improving quantitative snowfall forecasts therefore requires, in addition to solving the significant challenge of forecasting liquid precipitation amounts, a more robust method for forecasting snow density. A review of the microphysical literature reveals that many factors may contribute to snow density, including in-cloud processes (crystal habit and size, degree of riming and aggregation of the snowflake), subcloud processes (melting and sublimation), and surface processes (compaction and snowpack metamorphism). Despite this complexity, the paper explores the sufficiency of surface and radiosonde data for classifying snowfall density. A principal component analysis isolates seven factors that influence the snow ratio: solar radiation (month), low- to midlevel temperature, mid- to upper-level temperature, low- to midlevel relative humidity, midlevel relative humidity, upper-level relative humidity, and external compaction (surface wind speed and liquid equivalent). A 10-member ensemble of artificial neural networks is employed to explore the capability of determining the snow ratio in one of three classes: heavy (1:1 < ratio < 9:1), average (9:1 ≤ ratio ≤ 15:1), and light (ratio > 15:1).
The ensemble correctly diagnoses 60.4% of the cases, a substantial improvement over the 41.7% correct using the sample climatology, 45.0% correct using the 10-to-1 ratio, and 51.7% correct using the National Weather Service “new snowfall to estimated meltwater conversion” table. A key skill measure, the Heidke skill score, attains values of 0.34–0.42 using the ensemble technique, an increase of 75%–183% over the next most skillful approach. The critical success index shows that the ensemble technique provides the best information for all three snow-ratio classes. The most critical inputs to the ensemble are related to the month, temperature, and external compaction. Withholding relative humidity information from the neural networks leads to a loss of at least 5% in percent correct, suggesting that these inputs are useful, if nonessential. Examples of pairs of cases highlight the influence that these factors have in determining the snow ratio. Given the improvement over presently used techniques for diagnosing the snow ratio, this study indicates that the neural network approach can lead to advances in forecasting snowfall depth.
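For reference, the Heidke skill score quoted above measures proportion correct relative to the proportion expected correct by chance. A sketch of the standard multicategory form for a square contingency table (the example table below is invented, not data from the study):

```python
def heidke_skill_score(table):
    """Multicategory Heidke skill score.

    table[i][j] = count of cases forecast in class i and observed
    in class j. HSS = (PC - E) / (1 - E), where PC is the proportion
    correct and E the chance-expected proportion correct computed
    from the forecast and observed marginal frequencies."""
    n = sum(sum(row) for row in table)
    pc = sum(table[i][i] for i in range(len(table))) / n
    e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (pc - e) / (1.0 - e)
```

A perfect forecast set (all counts on the diagonal) gives HSS = 1, while forecasts no better than chance give HSS = 0, so the 0.34–0.42 values quoted above sit well clear of chance.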

Full access