Search Results

You are looking at 1–10 of 16 items for Author or Editor: Isidora Jankov (all content).
Tomislava Vukicevic, Isidora Jankov, and John McGinley

Abstract

In the current study, a technique is presented for evaluating ensemble forecast uncertainties produced by initial conditions, by different model versions, or by both. The technique consists of first diagnosing the performance of the forecast ensemble and then optimizing the ensemble forecast using the results of that diagnosis. It is based on the explicit evaluation of probabilities associated with a Gaussian stochastic representation of the weather analysis and forecast, and it combines an ensemble technique for evaluating the analysis error covariance with the standard Monte Carlo approach for drawing samples from a known Gaussian distribution. The technique was demonstrated in a tutorial manner on two relatively simple examples to illustrate the impact of ensemble characteristics, including ensemble size, various observation strategies, and configurations with different model versions and varying initial conditions. In addition, the authors assessed improvements in the consensus forecast gained by optimally weighting the ensemble members based on time-varying prior probabilistic skill measures. The results with different observation configurations indicate that, as observations become denser, larger ensembles and/or more accurate individual members are needed for the ensemble forecast to exhibit prediction skill. The main conclusions for ensembles built from different physics configurations were, first, that almost all members typically exhibited some skill at some point in the model run, suggesting that all should be retained to obtain the best consensus forecast; and, second, that the normalized probability metric can be used to determine which sets of weights or physics configurations perform best. Comparing forecasts from a simple ensemble mean with forecasts from a mean in which members were weighted by their prior performance under the probabilistic measure showed that the latter had a substantially reduced mean absolute error, and a weighting scheme that used more prior cycles reduced the forecast error further.
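
The sampling and weighting steps described above can be made concrete with a short sketch. The Python below is purely illustrative: the state vector, covariance matrix, skill values, and the stand-in "forecast model" are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 3-variable analysis state and its error covariance P
# (in the paper, P is estimated from the ensemble itself).
x_analysis = np.array([287.0, 5.0, 0.8])   # e.g., T (K), wind (m/s), RH
P = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 0.05]])

# Standard Monte Carlo step: draw initial-condition samples from the
# Gaussian N(x_analysis, P).
n_members = 20
ic_samples = rng.multivariate_normal(x_analysis, P, size=n_members)

# Toy stand-in for running the forecast model on each member.
forecasts = ic_samples + rng.normal(0.0, 0.3, size=ic_samples.shape)

# Skill-weighted consensus: weight each member by a prior skill measure
# (here a made-up value; the paper uses time-varying probabilistic skill
# accumulated over prior cycles), normalized to sum to one.
prior_skill = rng.uniform(0.1, 1.0, size=n_members)
weights = prior_skill / prior_skill.sum()

consensus = weights @ forecasts        # skill-weighted ensemble mean
simple_mean = forecasts.mean(axis=0)   # unweighted baseline
print(consensus, simple_mean)
```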

Full access
Isidora Jankov and William A. Gallus Jr.

Abstract

The large-scale forcing associated with 20 mesoscale convective system (MCS) events has been evaluated to determine how the magnitude of that forcing influences the rainfall forecasts made with a 10-km grid spacing version of the Eta Model. Different convective parameterizations and initialization modifications were used to simulate these Upper Midwest events. Cases were simulated using both the Betts–Miller–Janjić (BMJ) and the Kain–Fritsch (KF) convective parameterizations, and three different techniques were used to improve the initialization of mesoscale features important to later MCS evolution. These techniques included a cold pool initialization, vertical assimilation of surface mesoscale observations, and an adjustment to the initialized relative humidity based on radar echo coverage. In addition, a morphology analysis of the 20 MCSs was performed.

Results suggest that the model using both schemes performs better when net large-scale forcing is strong, which typically is the case when a cold front moves across the domain. When net forcing is weak, which is often the case in midsummer situations north of a warm or stationary front, both versions of the model perform poorly. Runs with the BMJ scheme seem to be more affected by the magnitude of surface frontogenesis than the KF runs. Runs with the KF scheme are more sensitive to the amount of convective available potential energy (CAPE) than the BMJ runs. A fairly well-defined split in morphology was observed: squall lines with trailing stratiform regions were likely in the scenarios associated with higher equitable threat scores (ETSs), whereas nonlinear convective clusters strongly dominated the more poorly forecast, weakly forced events.
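
For reference, the equitable threat score mentioned above is computed from a 2x2 contingency table of forecast versus observed exceedances; a minimal sketch (the counts are illustrative, not from the study):

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """Equitable threat score (Gilbert skill score) from a 2x2
    contingency table of forecast vs. observed rain exceedance."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by random chance, given the marginal frequencies.
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Example: verify one rainfall threshold over a model domain.
print(equitable_threat_score(hits=120, misses=80, false_alarms=60,
                             correct_negatives=740))
```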

Full access
William A. Gallus Jr., James Correia Jr., and Isidora Jankov

Abstract

Warm season convective system rainfall forecasts remain a particularly difficult forecast challenge. For these events, it is possible that ensemble forecasts would provide helpful information unavailable in a single deterministic forecast. In this study, an intense derecho event accompanied by a well-organized band of heavy rainfall is used to show that for some situations, the predictability of rainfall even within a 12–24-h period is so low that a wide range of simulations using different models, different physical parameterizations, and different initial conditions all fail to provide even a small signal that the event will occur. The failure of a wide range of models and parameterizations to depict the event might suggest inadequate representation of the initial conditions. However, a range of different initial conditions also failed to lead to a well-simulated event, suggesting that some events are unlikely to be predictable with the current observational network, and ensemble guidance for such cases may provide limited additional information useful to a forecaster.

Full access
Isidora Jankov, Jian-Wen Bao, Paul J. Neiman, Paul J. Schultz, Huiling Yuan, and Allen B. White

Abstract

Numerical prediction of precipitation associated with five cool-season atmospheric river events in northern California was analyzed and compared to observations. The model simulations were performed using the Advanced Research Weather Research and Forecasting Model (ARW-WRF) with four different microphysical parameterizations. This was done as part of the 2005–06 field phase of the Hydrometeorological Testbed project, for which special profilers, soundings, and surface observations were deployed. Using these unique datasets, the meteorology of atmospheric river events was described in terms of dynamical processes and the microphysical structure of the cloud systems that produced most of the surface precipitation. Events were categorized as “bright band” (BB) or “nonbright band” (NBB), the differences being the presence of significant amounts of ice aloft (or the lack thereof) and a signature of higher reflectivity collocated with the melting layer, produced by frozen precipitating particles descending through the 0°C isotherm.

The model was reasonably successful at predicting the timing of surface fronts, the development and evolution of low-level jets associated with latent heating processes and terrain interaction, and wind flow signatures consistent with deep-layer thermal advection. However, the model showed a tendency to overestimate the duration and intensity of the impinging low-level winds. In general, all model configurations overestimated precipitation, especially for BB events. Nonetheless, large differences in precipitation distribution and cloud structure were noted among model runs using the various microphysical parameterization schemes.
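
A bright-band signature of the kind described above can be screened for crudely from a single reflectivity profile. The sketch below is a hypothetical illustration, not the classification procedure used in the study; the window width and enhancement threshold are invented:

```python
import numpy as np

def has_bright_band(heights_m, reflect_dbz, melting_level_m,
                    window_m=500.0, enhancement_db=2.0):
    """Crude bright-band check: look for a reflectivity maximum within
    +/- window_m of the melting level that exceeds the rest of the
    profile by at least enhancement_db."""
    near = np.abs(heights_m - melting_level_m) <= window_m
    if not near.any() or near.all():
        return False
    return reflect_dbz[near].max() >= reflect_dbz[~near].max() + enhancement_db

# Synthetic profile: background reflectivity decreasing with height,
# plus an enhanced layer just below the 0 degC level.
heights = np.arange(0, 6000, 250.0)
profile = 20.0 - 0.002 * heights
profile[(heights > 1800) & (heights < 2300)] += 8.0
print(has_bright_band(heights, profile, melting_level_m=2000.0))  # True
```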

Full access
Isidora Jankov, William A. Gallus Jr., Moti Segal, Brent Shaw, and Steven E. Koch

Abstract

In recent years, a mixed-physics ensemble approach has been investigated as a method to better predict mesoscale convective system (MCS) rainfall. For both mixed-physics ensemble design and interpretation, knowledge of the general impact of various physical schemes and their interactions on warm-season MCS rainfall forecasts would be useful, and gaining that knowledge with the newly emerging Weather Research and Forecasting (WRF) model would make it all the more valuable. To pursue this goal, a matrix of 18 WRF model configurations, created using different combinations of physical schemes, was run with 12-km grid spacing for eight International H2O Project (IHOP) MCS cases. For each case, three different treatments of convection, three different microphysical schemes, and two different planetary boundary layer (PBL) schemes were used. Sensitivity to physics changes was determined using the correspondence ratio and the squared correlation coefficient. The factor separation method was also used to quantify in detail the impacts of varying two different physical schemes and their interaction on the simulated rainfall.

Skill score measures averaged over all eight cases for all 18 configurations indicated that no one configuration was obviously best at all times and thresholds. The greatest variability in the forecasts came from changes in the choice of convective scheme, although notable impacts also occurred from changes in the microphysical and PBL schemes. Specifically, changes in convective treatment notably impacted the forecast of system-average rain rate, while forecasts of total domain rain volume were influenced by the choices of microphysics and convective treatment. The impact of interactions (synergy) between different physical schemes, although occasionally comparable in magnitude to the impact of changing one scheme alone (relative to a control run), varied greatly among cases and over time, and was typically not statistically significant.
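
The factor separation calculation referenced above (commonly attributed to Stein and Alpert 1993) isolates pure and interaction terms from four runs; a minimal two-factor sketch follows, where the rainfall totals are invented placeholders and, in practice, each term would be a full simulated field:

```python
# Factor separation for two switched factors, e.g., microphysics
# scheme (factor 1) and PBL scheme (factor 2). Values are placeholder
# domain-total rainfall amounts (mm).
f00 = 10.0   # control: neither factor changed
f10 = 13.0   # factor 1 changed only
f01 = 11.0   # factor 2 changed only
f11 = 16.0   # both factors changed

pure_1 = f10 - f00                 # isolated impact of factor 1
pure_2 = f01 - f00                 # isolated impact of factor 2
synergy = f11 - (f10 + f01) + f00  # interaction (synergistic) term
print(pure_1, pure_2, synergy)     # 3.0 1.0 2.0
```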

Full access
Isidora Jankov, Paul J. Schultz, Christopher J. Anderson, and Steven E. Koch

Abstract

The most significant precipitation events in California occur during the winter and are often related to synoptic-scale storms from the Pacific Ocean. Because of the terrain characteristics and the concentration of urban and infrastructural expansion in the lower elevations of the California Central Valley, a high risk of flooding is usually associated with these events. In the present study, the area of interest was the American River basin (ARB). The main focus was to investigate methods for improving quantitative precipitation forecasts (QPFs) by estimating the impact that various microphysical schemes, planetary boundary layer (PBL) schemes, and initialization methods have on cold-season precipitation, primarily orographically induced. For this purpose, 3-km grid spacing Weather Research and Forecasting (WRF) model simulations of four Hydrometeorological Testbed (HMT) events were used. For each event, four different microphysical schemes and two different PBL schemes were used. All runs were initialized with both a diabatic Local Analysis and Prediction System (LAPS) “hot” start and 40-km Eta analyses.

To quantify the impact of physical schemes, their interactions, and initial conditions upon simulated rain volume, the factor separation methodology was used. The results showed that simulated rain volume was particularly affected by changes in microphysical schemes for both initializations. When the initialization was changed from the LAPS to the Eta analysis, the change in the PBL scheme and the corresponding synergistic terms (the interactions between different microphysical and PBL schemes) had a statistically significant impact on rain volume. In addition, by combining model runs based on the knowledge of their impact on simulated rain volume obtained through the factor separation methodology, the bias in simulated rain volume was reduced.

Full access
Isidora Jankov, William A. Gallus Jr., Moti Segal, and Steven E. Koch

Abstract

To assist in optimizing a mixed-physics ensemble for warm-season mesoscale convective system rainfall forecasting, the impact of various physical schemes, as well as their interactions, on rainfall under different initializations has been investigated. For this purpose, high-resolution Weather Research and Forecasting (WRF) model simulations of eight International H2O Project events were performed. For each case, three different treatments of convection, three different microphysical schemes, and two different planetary boundary layer (PBL) schemes were used. All cases were initialized with both Local Analysis and Prediction System (LAPS) “hot” start analyses and 40-km Eta Model analyses. To evaluate the impacts of the variation of two different physical schemes and their interaction on the simulated rainfall under the two different initial conditions, the factor separation method was used. The sensitivity to the use of various physical schemes and their interactions was found to depend on the initialization dataset. Runs initialized with Eta analyses appeared to be influenced by the use of the Betts–Miller–Janjić scheme in that model’s assimilation system, which tended to reduce the WRF’s sensitivity to changes in the microphysical scheme compared with the sensitivity present when LAPS analyses were used for initialization. In addition, differences in the initialized thermodynamics changed the sensitivity to the PBL and convective schemes. With both initialization datasets, the simulated rain rate was most sensitive to changes in the convective scheme. However, rain volume was substantially sensitive to changes in both the physical parameterizations and the initial datasets.

Full access
Isidora Jankov, Jeffrey Beck, Jamie Wolff, Michelle Harrold, Joseph B. Olson, Tatiana Smirnova, Curtis Alexander, and Judith Berner

Abstract

A stochastically perturbed parameterization (SPP) approach that spatially and temporally perturbs parameters and variables in the Mellor–Yamada–Nakanishi–Niino planetary boundary layer (PBL) scheme and introduces initialization perturbations to soil moisture in the Rapid Update Cycle land surface model was developed within the High-Resolution Rapid Refresh convection-allowing ensemble. This work is a follow-up to an earlier study performed with the Rapid Refresh (RAP)-based ensemble. In the present study, the SPP approach was used to target the performance of precipitation and low-level variables (e.g., 2-m temperature and dewpoint, and 10-m wind). The stochastic kinetic energy backscatter (SKEB) scheme and the stochastic perturbation of physics tendencies (SPPT) scheme were combined with the SPP approach and applied to the PBL scheme to target upper-level variable performance (e.g., improved skill and reliability). The three stochastic experiments (SPP applied to the PBL scheme only, SPP applied to the PBL scheme combined with SKEB and SPPT, and stochastically perturbed soil moisture initial conditions) were compared to a mixed-physics ensemble. The results showed a positive impact from initial-condition soil moisture perturbations on precipitation forecasts; however, these perturbations also increased the 2-m dewpoint RMSE. The experiment with perturbed parameters within the PBL scheme showed an improvement in low-level wind forecasts for some verification metrics. The experiment that combined the three stochastic approaches exhibited improved RMSE and spread for upper-level variables. Our study demonstrated that forecasts of specific variables can be improved by using the SPP approach. The results also showed that a single-physics-suite ensemble with stochastic methods is potentially an attractive alternative to a multiphysics approach for convection-allowing ensembles.
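
The core of an SPP-style scheme is a spatially and temporally correlated random pattern that scales the perturbed parameter. The sketch below is an illustrative stand-in, not the operational HRRR code: the grid size, length scale, AR(1) coefficient, amplitude, and the perturbed "mixing length" parameter are all assumed for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

nx = ny = 100
length_scale = 10   # grid points; controls spatial decorrelation
alpha = 0.95        # AR(1) coefficient; controls temporal memory
sigma = 0.3         # target perturbation amplitude

def smoothed_noise():
    """White noise smoothed for spatial correlation, unit variance."""
    field = gaussian_filter(rng.standard_normal((ny, nx)), length_scale)
    return field / field.std()

# Evolve the pattern as an AR(1) process for temporal correlation.
pattern = smoothed_noise()
for _ in range(10):
    pattern = alpha * pattern + np.sqrt(1 - alpha**2) * smoothed_noise()

# Apply the pattern multiplicatively to a nominal PBL parameter value.
mixing_length = 30.0                                  # assumed nominal (m)
perturbed = mixing_length * (1.0 + sigma * pattern)   # perturbed field
print(perturbed.min(), perturbed.max())
```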

Full access
Evan A. Kalina, Isidora Jankov, Trevor Alcott, Joseph Olson, Jeffrey Beck, Judith Berner, David Dowell, and Curtis Alexander

Abstract

The High-Resolution Rapid Refresh Ensemble (HRRRE) is a 36-member ensemble analysis system with nine forecast members that utilizes the Advanced Research Weather Research and Forecasting (ARW-WRF) dynamic core and the physics suite from the operational Rapid Refresh/High-Resolution Rapid Refresh deterministic modeling system. A goal of HRRRE development is a system with sufficient spread among members, comparable in magnitude to the random error in the ensemble mean, to represent the range of possible future atmospheric states. HRRRE member diversity has traditionally been obtained by perturbing the initial and lateral boundary conditions of each member, but recent development has focused on implementing stochastic approaches in HRRRE to generate additional spread. These techniques were tested in retrospective experiments and in the May 2019 Hazardous Weather Testbed Spring Experiment (HWT-SE). Results show a 6–25% increase in ensemble spread in 2-m temperature, 2-m mixing ratio, and 10-m wind speed when stochastic parameter perturbations are used in HRRRE (HRRRE-SPP). Case studies from HWT-SE demonstrate that HRRRE-SPP performed similarly to or better than the operational High-Resolution Ensemble Forecast system version 2 (HREFv2) and the nonstochastic HRRRE. However, subjective evaluations provided by HWT-SE forecasters indicated that, overall, HRRRE-SPP predicted lower probabilities of severe weather (using updraft helicity as a proxy) than HREFv2. A statistical analysis of the performance of HRRRE-SPP and HREFv2 over the 2019 summer convective season supports this claim, but also demonstrates that the two systems have similar reliability for the prediction of severe weather using updraft helicity.
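
The spread-error consistency targeted above can be illustrated with synthetic data: for a statistically consistent ensemble, the mean ensemble standard deviation (spread) should be comparable to the RMSE of the ensemble mean against observations. A toy check, with all values synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

n_members, n_points = 9, 5000
sigma = 0.8   # assumed member error standard deviation

# Construct a consistent ensemble: members and the "truth" are both
# drawn from the same distribution around a common forecast center.
center = rng.normal(0.0, 1.0, n_points)
members = center + rng.normal(0.0, sigma, (n_members, n_points))
truth = center + rng.normal(0.0, sigma, n_points)

ens_mean = members.mean(axis=0)
spread = members.std(axis=0, ddof=1).mean()          # mean member spread
rmse = np.sqrt(((ens_mean - truth) ** 2).mean())     # error of the mean

# For a consistent ensemble the two values are comparable.
print(f"spread={spread:.3f}  rmse={rmse:.3f}")
```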

Open access
Benjamin T. Blake, Jacob R. Carley, Trevor I. Alcott, Isidora Jankov, Matthew E. Pyle, Sarah E. Perfater, and Benjamin Albright

Abstract

Traditional ensemble probabilities are computed as the number of members that exceed a threshold at a given point divided by the total number of members. This approach has been employed for many years in coarse-resolution models. However, convection-permitting ensembles of fewer than ~20 members are generally underdispersive, and spatial displacement at the gridpoint scale is often large. These issues have motivated the development of spatial filtering and neighborhood postprocessing methods, such as fractional coverage and neighborhood maximum value, which address this spatial uncertainty. Two different fractional coverage approaches for the generation of gridpoint probabilities were evaluated. The first method expands the traditional point probability calculation to cover a 100-km radius around a given point. The second method applies the idea that a uniform radius is not appropriate when there is strong agreement between members; in such cases, the traditional fractional coverage approach can reduce the probabilities for these potentially well-handled events. Therefore, a variable-radius approach was developed based upon ensemble agreement scale similarity criteria. In this method, the radius ranges from 10 km for member forecasts that are in good agreement (e.g., lake-effect snow, orographic precipitation, and very short-term forecasts) to 100 km when the members are more dissimilar. Results from the application of this adaptive technique to the calculation of point probabilities for precipitation forecasts are presented, based upon several months of objective verification and subjective feedback from the 2017 Flash Flood and Intense Rainfall Experiment.
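
The point and neighborhood probability calculations described above can be sketched as follows. This is an illustration, not the operational code: a square uniform filter stands in for the paper's circular radius, the fields are synthetic, and the adaptive radius selection is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(2)

# Toy ensemble of precipitation fields: 5 members on a 200x200 grid.
members = rng.gamma(shape=0.5, scale=4.0, size=(5, 200, 200))
threshold = 10.0   # e.g., mm of rain

# Traditional point probability: fraction of members exceeding the
# threshold at each grid point.
point_prob = (members > threshold).mean(axis=0)

def neighborhood_prob(fields, thr, half_width):
    """Fractional-coverage probability: average each member's binary
    exceedance field over a square neighborhood, then combine members.
    The adaptive method would choose half_width per point from member
    agreement; here it is fixed."""
    binary = (fields > thr).astype(float)
    size = 2 * half_width + 1
    return uniform_filter(binary, size=(1, size, size)).mean(axis=0)

prob_small = neighborhood_prob(members, threshold, half_width=2)   # ~10 km
prob_large = neighborhood_prob(members, threshold, half_width=20)  # ~100 km
print(point_prob.max(), prob_small.max(), prob_large.max())
```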

Full access