Search Results

You are looking at 1–10 of 20 items for

  • Author or Editor: Jacob R. Carley
  • Refine by Access: All Content
Eric A. Aligo, Brad Ferrier, and Jacob R. Carley

Abstract

The Ferrier–Aligo (FA) microphysics scheme has been running operationally in the National Centers for Environmental Prediction (NCEP) North American Mesoscale Forecast System (NAM) since August 2014. It was developed to improve forecasts of deep convection in the NAM contiguous United States (CONUS) nest, and it replaces previous versions of the NAM microphysics. The FA scheme is the culmination of extensive microphysical sensitivity experiments over nearly a dozen warm- and cool-season severe weather cases, as well as extensive real-time testing in a full, system-wide developmental version of the NAM. While the FA scheme advects each hydrometeor species separately, it was the advection of a mass-weighted rime factor (RF) that allowed rimed ice to reach very cold temperatures aloft and improved the vertical structure of deep convection. Rimed ice fall speeds were reduced to offset an increased heavy-precipitation bias that resulted from the mass-weighted RF advection. The FA scheme also incorporated findings from 3-km model runs using the Thompson scheme, including 1) improved closure assumptions for large precipitating ice that targeted the convective and anvil regions of storms, 2) a new diagnostic calculation of radar reflectivity from rimed ice associated with intense convection, and 3) a variable rain intercept parameter that reduced widespread spurious weak reflectivity from shallow boundary layer clouds and increased stratiform rainfall.
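
The mass-weighted RF advection noted above can be sketched in a few lines: instead of transporting the rime factor itself, the product of RF and rimed-ice mass is transported, and RF is recovered by division afterward. The following is a minimal one-dimensional illustration, not the FA implementation; `advect1d` and all variable names are hypothetical.

```python
import numpy as np

def advect1d(q, u, dx, dt):
    """Hypothetical helper: first-order upwind advection of scalar q
    by a constant positive wind u on a periodic 1D grid."""
    flux = u * q
    return q - dt * (flux - np.roll(flux, 1)) / dx

def advect_rime_factor(ice_mass, rf, u, dx, dt):
    """Advect the rime factor as a mass-weighted quantity.

    Transporting rf*ice_mass alongside ice_mass keeps RF consistent with
    the ice field, so heavily rimed ice can be carried to cold levels aloft.
    """
    rime_mass = rf * ice_mass                    # mass-weighted RF
    ice_new = advect1d(ice_mass, u, dx, dt)      # transport ice mass
    rime_new = advect1d(rime_mass, u, dx, dt)    # transport rime mass
    rf_new = np.where(ice_new > 1e-12, rime_new / ice_new, 1.0)
    return ice_new, rf_new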

Full access
Donald E. Lippi, Jacob R. Carley, and Daryl T. Kleist

Abstract

This work describes developments to improve the Doppler radial wind data assimilation scheme used in the National Centers for Environmental Prediction (NCEP) Gridpoint Statistical Interpolation (GSI) data assimilation system, with a focus on convection-permitting, 0–18-h forecasts of a single heavy-precipitation case study. Two aspects are addressed: 1) the extension of the radial wind observation operator to include vertical velocity and 2) a refinement of the radial wind super-observation processing. The refinement reduces the magnitude of observation smoothing and allows observations from higher scan angles into the analysis, with the intent of improving the assimilation of radar data for operational, convection-permitting models. The results demonstrate sensitivity to the super-observation settings. The inclusion of vertical velocity in the observation operator is shown to have a neutral to slightly positive impact on the forecast. The results are intended to serve as a foundation for prioritizing future research into the effective assimilation of radial winds in an operational setting.
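
For readers unfamiliar with the operator being extended, a generic textbook form of the radial wind forward operator, including the vertical term, is sketched below. This illustrates the projection geometry only, not the GSI code; the hydrometeor fall speed argument is an assumption added for completeness.

```python
import numpy as np

def radial_velocity(u, v, w, azimuth_deg, elevation_deg, fall_speed=0.0):
    """Project model winds (u, v, w) onto the radar beam.

    Azimuth is measured clockwise from north and elevation from the
    horizontal. Conventional operators keep only the u and v terms;
    including w (minus the hydrometeor fall speed) is the extension
    discussed in the abstract.
    """
    az = np.deg2rad(azimuth_deg)
    el = np.deg2rad(elevation_deg)
    return (u * np.sin(az) * np.cos(el)
            + v * np.cos(az) * np.cos(el)
            + (w - fall_speed) * np.sin(el))
```

At low elevation angles the sin(el) factor makes the vertical term small, which is consistent with the neutral-to-slightly-positive impact reported above.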

Free access
Jeffrey D. Duda, Xuguang Wang, Yongming Wang, and Jacob R. Carley

Abstract

Two methods for assimilating radar reflectivity into deterministic convection-allowing forecasts were compared: an operationally used, computationally less expensive cloud analysis (CA) scheme and a relatively more expensive, but rigorous, ensemble Kalman filter–variational hybrid method (EnVar). These methods were implemented in the Nonhydrostatic Multiscale Model on the B-grid and were tested on 10 cases featuring high-impact deep convective storms and heavy precipitation. A variety of traditional, neighborhood-based, and features-based verification metrics indicate that the EnVar produced superior free forecasts compared to the CA procedure, with statistically significant differences extending up to 9 h into the forecast. Although inferior, the CA scheme still provided benefit relative to not assimilating radar reflectivity at all, but only during the first few forecast hours. While the EnVar is able to partially suppress spurious convection by assimilating 0-dBZ reflectivity observations directly, the CA is not designed to reduce or remove hydrometeors. As a result, the CA struggles more with suppression of spurious convection in the first-guess field, which results in inflated frequency biases and poor forecast evolution, as illustrated in a few case studies. Additionally, while the EnVar uses flow-dependent ensemble covariances to update hydrometeor, thermodynamic, and dynamic variables simultaneously when reflectivity is assimilated, the CA relies on a reflectivity-derived latent heating rate applied during a separate digital filter initialization (DFI) procedure to introduce deep convective storms into the model, and the results of the CA are shown to be sensitive to the window length used in the DFI.
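
To make the CA side of the comparison concrete, a reflectivity-derived heating rate can be illustrated with a simple Z–R conversion. The sketch below uses the Marshall–Palmer relation and a uniform heating column; it is only a stand-in for the operational cloud analysis retrieval, and all constants are illustrative assumptions.

```python
import numpy as np

LV = 2.5e6  # latent heat of condensation (J/kg)

def heating_from_reflectivity(dbz, depth_m=5000.0, rho_air=1.0, cp=1004.0):
    """Illustrative reflectivity -> latent heating rate (K/s).

    Converts reflectivity to rain rate with Z = 200 R**1.6, treats the
    implied rainfall as condensate formed in a column of the given depth,
    and spreads the condensational heating uniformly over that column.
    """
    z_linear = 10.0 ** (np.asarray(dbz) / 10.0)        # mm^6 m^-3
    rain_mm_per_hr = (z_linear / 200.0) ** (1.0 / 1.6)
    mass_flux = rain_mm_per_hr / 3600.0                # kg m^-2 s^-1
    return LV * mass_flux / (rho_air * cp * depth_m)   # K/s

# Example: 40 dBZ implies ~11 mm/h of rain and ~6 K/h of column heating.
print(heating_from_reflectivity(40.0) * 3600.0)
```

Heating of this kind adjusts temperature only; it is the lack of any direct hydrometeor reduction that leaves the CA unable to suppress spurious convection, as described above.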

Full access
Aaron Johnson, Xuguang Wang, Jacob R. Carley, Louis J. Wicker, and Christopher Karstens

Abstract

A GSI-based data assimilation (DA) system, including three-dimensional variational assimilation (3DVar) and ensemble Kalman filter (EnKF), is extended to the multiscale assimilation of both meso- and synoptic-scale observation networks and convective-scale radar reflectivity and velocity observations. EnKF and 3DVar are systematically compared in this multiscale context to better understand the impacts of differences between the DA techniques on the analyses at multiple scales and the subsequent convective-scale precipitation forecasts.

Averaged over 10 diverse cases, 8-h precipitation forecasts initialized using GSI-based EnKF are more skillful than those using GSI-based 3DVar, both with and without storm-scale radar DA. The advantage from radar DA persists for ~5 h using EnKF, but only ~1 h using 3DVar.

A case study of an upscale-growing MCS is also examined. The better EnKF-initialized forecast is attributed to more accurate analyses of both the mesoscale environment and the storm-scale features. The mesoscale location and structure of a warm front are more accurately analyzed using EnKF than 3DVar. Furthermore, storms in the EnKF multiscale analysis are maintained during the subsequent forecast period, whereas storms in the 3DVar multiscale analysis are not maintained and generate excessive cold pools. Therefore, while the EnKF forecast with radar DA remains better than the forecast without radar DA throughout the forecast period, the 3DVar forecast quality is degraded by radar DA after the first hour. Diagnostics reveal that the inferior 3DVar analyses at the mesoscale and storm scale are primarily attributed to the lack of flow dependence and of cross-variable correlation, respectively, in the 3DVar static background error covariance.
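
The flow-dependence argument above is compactly expressed by the standard analysis update, written here in generic notation rather than the specific GSI formulation. Both methods share the same update equation but differ in the background error covariance B:

```latex
\mathbf{x}^{a} = \mathbf{x}^{b} + \mathbf{B}\mathbf{H}^{\mathsf{T}}
  \left(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R}\right)^{-1}
  \left[\mathbf{y} - H\!\left(\mathbf{x}^{b}\right)\right]
```

In 3DVar, B is static and carries limited cross-variable structure; in the EnKF it is replaced by the flow-dependent sample covariance of the forecast ensemble (localized in practice),

```latex
\mathbf{P}^{b} = \frac{1}{N-1}\sum_{i=1}^{N}
  \left(\mathbf{x}^{b}_{i} - \overline{\mathbf{x}^{b}}\right)
  \left(\mathbf{x}^{b}_{i} - \overline{\mathbf{x}^{b}}\right)^{\mathsf{T}},
```

which supplies exactly the flow-dependent and cross-variable correlations the static B lacks.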

Full access
Craig S. Schwartz, Jonathan Poterjoy, Glen S. Romine, David C. Dowell, Jacob R. Carley, and Jamie Bresch

Abstract

Nine sets of 36-h, 10-member, convection-allowing ensemble (CAE) forecasts with 3-km horizontal grid spacing were produced over the conterminous United States for a 4-week period. These CAEs had identical configurations except for their initial conditions (ICs), which were constructed to isolate CAE forecast sensitivity to the resolution of IC perturbations and of the central initial states about which the perturbations were centered. The IC perturbations and central initial states were provided by limited-area ensemble Kalman filter (EnKF) analyses with both 15- and 3-km horizontal grid spacings, as well as by NCEP’s Global Forecast System (GFS) and Global Ensemble Forecast System. Given fixed-resolution IC perturbations, reducing the horizontal grid spacing of the central initial states improved ∼1–12-h precipitation forecasts. Conversely, for constant-resolution central initial states, reducing the horizontal grid spacing of the IC perturbations led to comparatively smaller short-term forecast improvements or none at all. Overall, all CAEs initially centered on 3-km EnKF mean analyses produced objectively better ∼1–12-h precipitation forecasts than CAEs initially centered on GFS or 15-km EnKF mean analyses, regardless of IC perturbation resolution. This strongly suggests that, for short-term CAE forecasting applications, it is more important for the central initial states than for the IC perturbations to possess fine-scale structures, although fine-scale perturbations could still be critical for data assimilation purposes. These findings have important implications for future operational CAE forecast systems and suggest that CAE IC development efforts focus on producing the best possible high-resolution deterministic analyses to serve as central initial states.
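
Centering IC perturbations about a chosen central state is typically a recentering operation: subtract the perturbation ensemble’s own mean from each member, then add the chosen central analysis. A minimal NumPy sketch, assuming the members have already been interpolated to the central state’s grid (function and variable names are illustrative):

```python
import numpy as np

def recenter(members, central_state):
    """Recenter an ensemble of ICs about a new central state.

    members: array of shape (n_members, ...) supplying the perturbations.
    The returned ensemble has mean equal to central_state, while the
    member-to-member spread (the IC perturbations) is unchanged.
    """
    perturbations = members - members.mean(axis=0)
    return central_state + perturbations
```

Under this construction, swapping in a higher-resolution central state changes only where the ensemble is centered, which is exactly the sensitivity the experiments above isolate.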

Significance Statement

Ensembles of weather model forecasts are composed of different “members” that, when combined, can produce probabilities that specific weather events will occur. Ensemble forecasts begin from specified atmospheric states, called initial conditions. For ensembles where initial conditions differ across members, the initial conditions can be viewed as a set of small perturbations added to a central state provided by a single model field. Our study suggests it is more important to increase horizontal resolution of the central state than resolution of the perturbations when initializing ensemble forecasts with 3-km horizontal grid spacing. These findings suggest a potential for computational savings and a streamlined process for improving high-resolution ensemble initial conditions.

Restricted access
Craig S. Schwartz, Jonathan Poterjoy, Jacob R. Carley, David C. Dowell, Glen S. Romine, and Kayo Ide

Abstract

Several limited-area 80-member ensemble Kalman filter (EnKF) data assimilation systems with 15-km horizontal grid spacing were run over a computational domain spanning the conterminous United States (CONUS) for a 4-week period. One EnKF employed continuous cycling, where the prior ensemble was always the 1-h forecast initialized from the previous cycle’s analysis. In contrast, the other EnKFs used a partial cycling procedure, where limited-area states were discarded after 12 or 18 h of self-contained hourly cycles and reinitialized the next day from global model fields. “Blended” states were also constructed by combining large scales from global ensemble initial conditions (ICs) with small scales from limited-area continuously cycling EnKF analyses using a low-pass filter. Both the blended states and EnKF analysis ensembles initialized 36-h, 10-member ensemble forecasts with 3-km horizontal grid spacing. Continuously cycling EnKF analyses initialized ∼1–18-h precipitation forecasts that were comparable to or somewhat better than those with partial cycling EnKF ICs. Conversely, ∼18–36-h forecasts with partial cycling EnKF ICs were comparable to or better than those with unblended continuously cycling EnKF ICs. However, blended ICs yielded ∼18–36-h forecasts that were statistically indistinguishable from those with partial cycling ICs. ICs that more closely resembled global analysis spectral characteristics at wavelengths > 200 km, like partial cycling and blended ICs, were associated with relatively good ∼18–36-h forecasts. Ultimately, findings suggest that EnKFs employing a combination of continuous cycling and blending can potentially replace the partial cycling assimilation systems that currently initialize operational limited-area models over the CONUS without sacrificing forecast quality.
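
The blending step described above can be sketched with a sharp spectral cutoff: large scales from the global ICs, small scales from the limited-area EnKF analysis. Below is a minimal 2D FFT version, assuming periodic fields on a uniform grid and a hard cutoff in place of the smoother low-pass filter typically used; all names are illustrative.

```python
import numpy as np

def blend(global_state, lam_state, dx_km, cutoff_km=200.0):
    """Combine large scales of global_state with small scales of lam_state.

    Wavelengths longer than cutoff_km are taken from the global field,
    shorter wavelengths from the limited-area field.
    """
    ny, nx = global_state.shape
    kx = np.fft.fftfreq(nx, d=dx_km)              # cycles per km
    ky = np.fft.fftfreq(ny, d=dx_km)
    kxx, kyy = np.meshgrid(kx, ky)
    keep_global = np.hypot(kxx, kyy) <= 1.0 / cutoff_km
    blended_hat = np.where(keep_global,
                           np.fft.fft2(global_state),
                           np.fft.fft2(lam_state))
    return np.real(np.fft.ifft2(blended_hat))
```

The 200-km default mirrors the wavelength band at which, per the abstract, resembling global analysis spectral characteristics was associated with better ∼18–36-h forecasts.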

Significance Statement

Numerical weather prediction models (i.e., weather models) are initialized through a process called data assimilation, which combines real atmospheric observations with a previous short-term weather model forecast using statistical techniques. The overarching data assimilation strategy currently used to initialize operational regional weather models over the United States has several disadvantages that ultimately limit progress toward improving weather model forecasts. Thus, we suggest an alternative data assimilation strategy be adopted to initialize a next-generation, high-resolution (∼3 km) probabilistic forecast system currently being developed. This alternative approach preserves forecast quality while fostering a framework that can accelerate weather model improvements, which in turn will lead to better weather forecasts.

Full access
Jacob R. Carley, Benjamin R. J. Schwedler, Michael E. Baldwin, Robert J. Trapp, John Kwiatkowski, Jeffrey Logsdon, and Steven J. Weiss

Abstract

A feature-specific forecasting method for high-impact weather events that takes advantage of high-resolution numerical weather prediction models and spatial forecast verification methodology is proposed. An application of this method to the prediction of a severe convective storm event is given.

Full access
Laura C. Slivinski, Donald E. Lippi, Jeffrey S. Whitaker, Guoqing Ge, Jacob R. Carley, Curtis R. Alexander, and Gilbert P. Compo

Abstract

The U.S. operational global data assimilation system provides updated analysis and forecast fields every 6 h, which is not frequent enough to handle the rapid error growth associated with hurricanes or other storms. This motivates development of an hourly updating global data assimilation system, but observational data latency can be a barrier. Two methods are presented to overcome this challenge: “catch-up cycles,” in which a 1-hourly system is reinitialized from a 6-hourly system that has assimilated high-latency observations; and “overlapping assimilation windows,” in which the system is updated hourly with new observations valid in the past 3 h. The performance of these methods is assessed in a near-operational setup using the Global Forecast System by comparing forecasts with in situ observations. At short forecast leads, the overlapping windows method performs comparably to the 6-hourly control in a simplified configuration and outperforms the control in a full-input configuration. In the full-input experiment, the catch-up cycle method performs similarly to the 6-hourly control; reinitializing from the 6-hourly control does not appear to provide a significant benefit. Results suggest that the overlapping windows method performs well in part because of the hourly update cadence, but also because hourly cycling systems can make better use of available observations. The impact of the hourly update relative to the 6-hourly update is most significant during the first forecast day, while impacts on longer-range forecasts were found to be mixed and mostly insignificant. Further effort toward an operational global hourly updating system should be pursued.
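
The two strategies differ mainly in which observations each hourly analysis sees. The sketch below illustrates only the overlapping-windows bookkeeping; the window width and times are assumptions for illustration, not the operational settings.

```python
from datetime import datetime, timedelta

def overlapping_window(analysis_time, width_hours=3):
    """Observation window for an hourly cycle: all observations valid in
    the preceding width_hours are considered, so late-arriving data missed
    by one cycle can still be assimilated by a subsequent one."""
    return analysis_time - timedelta(hours=width_hours), analysis_time

# Three consecutive hourly cycles: each window overlaps the previous by 2 h.
t0 = datetime(2024, 1, 1, 0, 0)
for k in range(3):
    start, end = overlapping_window(t0 + timedelta(hours=k))
    print(f"cycle {k}: obs valid {start:%H%M}-{end:%H%M} UTC")
```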

Full access
Benjamin T. Blake, Jacob R. Carley, Trevor I. Alcott, Isidora Jankov, Matthew E. Pyle, Sarah E. Perfater, and Benjamin Albright

Abstract

Traditional ensemble probabilities are computed as the number of members exceeding a threshold at a given point divided by the total number of members. This approach has been employed for many years in coarse-resolution models. However, convection-permitting ensembles with fewer than ~20 members are generally underdispersive, and spatial displacement at the gridpoint scale is often large. These issues have motivated the development of spatial filtering and neighborhood postprocessing methods, such as fractional coverage and neighborhood maximum value, which address this spatial uncertainty. Two different fractional coverage approaches for generating gridpoint probabilities were evaluated. The first method expands the traditional point probability calculation to cover a 100-km radius around a given point. The second method applies the idea that a uniform radius is not appropriate when there is strong agreement between members; in such cases, the traditional fractional coverage approach can reduce the probabilities for these potentially well-handled events. Therefore, a variable-radius approach was developed based on ensemble agreement scale similarity criteria. In this method, the radius ranges from 10 km for member forecasts that are in good agreement (e.g., lake-effect snow, orographic precipitation, very short-term forecasts) to 100 km when the members are more dissimilar. Results from the application of this adaptive technique to the calculation of point probabilities for precipitation forecasts are presented, based on several months of objective verification and subjective feedback from the 2017 Flash Flood and Intense Rainfall Experiment.
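
The fixed-radius fractional coverage calculation generalizes the traditional point probability: exceedances are counted over all members and all neighborhood points, then divided by the total. A minimal sketch using a square window as a stand-in for the circular neighborhood; making `radius_gridpts` depend on a member-agreement measure would give the adaptive variant (names are illustrative):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def neighborhood_probability(members, threshold, radius_gridpts):
    """Fractional-coverage probability of exceeding a threshold.

    members: array of shape (n_members, ny, nx).
    First form the traditional point probability, then average it over
    the surrounding neighborhood to account for spatial displacement.
    """
    point_prob = (members > threshold).mean(axis=0)
    window = 2 * radius_gridpts + 1
    return uniform_filter(point_prob, size=window, mode="nearest")
```

Shrinking the window where members agree, as in the adaptive method, avoids diluting probabilities for well-handled events such as lake-effect snow.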

Full access
Timothy A. Supinie, Jun Park, Nathan Snook, Xiao-Ming Hu, Keith A. Brewster, Ming Xue, and Jacob R. Carley

Abstract

To help inform physics configuration decisions and to help design and optimize a multiphysics Rapid Refresh Forecast System (RRFS) ensemble to be used operationally by the National Weather Service, five FV3-LAM-based convection-allowing forecasts were run on 35 cases between October 2020 and March 2021. These forecasts used ∼3-km grid spacing on a CONUS domain, with physics configurations drawing from the Thompson, NSSL, and Ferrier–Aligo microphysics schemes; the Noah, RUC, and NoahMP land surface models; and the MYNN-EDMF, K-EDMF, and TKE-EDMF PBL schemes. All forecasts were initialized from the 0000 UTC GFS analysis and run for 84 h. In addition, a subset of 8 cases was run with 15 combinations of physics options, including also the Morrison–Gettelman microphysics and Shin–Hong PBL schemes, to help attribute behaviors to individual schemes and isolate the main contributors to forecast error. Evaluations of both sets of forecasts find that CONUS-wide 24-h precipitation exceeding 1 mm is positively biased across all five forecasts. NSSL microphysics displays a low QPF bias along the Gulf Coast; analyses show that it produces smaller raindrops prone to evaporation. Additionally, the TKE-EDMF PBL scheme in combination with Thompson microphysics displays a positive precipitation bias over the Great Lakes and the ocean near Florida due to higher latent heat fluxes calculated over water. Furthermore, the K-EDMF PBL scheme produces temperature errors that result in a negative snowfall bias over the southern Mountain West. Finally, recommendations for which physics schemes to use in future suites and in the RRFS ensemble are discussed.

Restricted access