Browse

You are looking at items 101–110 of 2,841 for:

  • Weather and Forecasting
Christopher J. Schultz, Roger E. Allen, Kelley M. Murphy, Benjamin S. Herzog, Stephanie A. Weiss, and Jacquelyn S. Ringhausen

Abstract

Infrequent lightning flashes occurring outside of surface precipitation pose challenges to Impact-Based Decision Support Services (IDSS) for outdoor activities. This paper examines remote sensing observations from an event on 20 August 2019 in which multiple cloud-to-ground flashes occurred more than 10 km outside surface precipitation (lowest radar tilt reflectivity < 10 dBZ and no evidence of surface precipitation) in the trailing stratiform region of a mesoscale convective system. The goal is to fuse radar with multiple lightning observations and a lightning risk model to demonstrate how reflectivity and differential reflectivity combined provided the best indicator of lightning potential where all of the other lightning safety methods failed. A total of 13 lightning flashes were observed by the Geostationary Lightning Mapper (GLM) within the trailing stratiform region between 2100 and 2300 UTC. The average size of the 13 lightning flashes was 3184 km2, with an average total optical energy of 7734 fJ. A total of 75 NLDN flash locations were coincident with the 13 GLM flashes, resulting in an average of 5.8 NLDN flashes [in-cloud (IC) and cloud-to-ground (CG)] per GLM flash. In total, five of the GLM flashes contained at least one positive cloud-to-ground (+CG) flash identified by the NLDN, with peak amplitudes ranging between 66 and 136 kA. All eight CG flashes identified by the NLDN were located more than 10 km outside surface precipitation. The only indication of the potential for these infrequent, large flashes was the presence of depolarization streaks in differential reflectivity (ZDR) and enhanced reflectivity near the melting layer.
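
As a quick sanity check on the flash statistics quoted above, the NLDN-to-GLM flash ratio follows directly from the stated counts; the snippet below simply reproduces that arithmetic and is not taken from the study's processing code.

    # Back-of-the-envelope check of the flash statistics quoted in the abstract.
    glm_flashes = 13    # GLM flashes in the trailing stratiform region, 2100-2300 UTC
    nldn_flashes = 75   # NLDN flash locations coincident with those GLM flashes

    nldn_per_glm = nldn_flashes / glm_flashes
    print(f"NLDN flashes per GLM flash: {nldn_per_glm:.1f}")  # -> 5.8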

Restricted access
Jonathan Poterjoy, Ghassan J. Alaka Jr., and Henry R. Winterbottom

Abstract

Limited-area numerical weather prediction models currently run operationally in the United States follow a “partially cycled” schedule, in which sequential data assimilation is periodically interrupted by replacing model states with solutions interpolated from a global model. While this strategy helps overcome several practical challenges associated with real-time regional forecasting, it is no substitute for a robust sequential data assimilation approach for research-to-operations purposes. Partial cycling can mask systematic errors in weather models, data assimilation systems, and data preprocessing techniques, since it introduces information from a different prediction system. It also adds extra heuristics to the model initialization steps outside the general Bayesian filtering framework from which data assimilation methods are derived. This study uses a research-oriented modeling system, which is self-contained within the operational Hurricane Weather Research and Forecasting (HWRF) Model package, to illustrate why next-generation modeling systems should prioritize sequential data assimilation at early stages of development. This framework permits the rigorous examination of all model system components in a manner that has never been done for the HWRF Model. Examples presented in this manuscript show how sequential data assimilation capabilities can accelerate model advancements and increase academic involvement in operational forecasting systems at a time when the United States is developing a new hurricane forecasting system.
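
As a rough illustration of why partial cycling can mask systematic model error, the toy loop below compares sequential cycling with periodic restarts from an external analysis; the biased one-variable "model", the relaxation-style "assimilation", and all numbers are hypothetical stand-ins, not components of HWRF or any operational system.

    # Toy illustration (not HWRF code) of why partial cycling can mask model bias:
    # a biased one-variable "model" drifts, sequential assimilation keeps correcting
    # it so the drift stays visible, while periodic replacement of the state with an
    # external analysis hides part of it. All functions and numbers are hypothetical.

    def forecast(state, bias=0.5):
        return state + bias                    # toy model step with a systematic drift

    def assimilate(state, obs, weight=0.5):
        return state + weight * (obs - state)  # simple relaxation toward the observation

    truth = 0.0
    obs_stream = [truth] * 12                  # perfect observations of a steady truth

    seq = part = 0.0
    for cycle, obs in enumerate(obs_stream):
        seq = assimilate(forecast(seq), obs)             # sequential cycling
        if cycle % 4 == 0:                               # partial cycling: periodic restart
            part = truth                                 # replace state with external analysis
        part = assimilate(forecast(part), obs)

    print(f"sequential-cycling error: {abs(seq - truth):.2f}")   # drift from model bias stays visible
    print(f"partial-cycling error:    {abs(part - truth):.2f}")  # part of the drift is masked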

Restricted access
Gregory J. Stumpf and Alan E. Gerard

Abstract

Threats-in-Motion (TIM) is a warning generation approach that would enable the NWS to advance severe thunderstorm and tornado warnings from the current static polygon system to continuously updating polygons that move forward with a storm. This concept is proposed as a first stage for implementation of the Forecasting a Continuum of Environmental Threats (FACETs) paradigm, which eventually aims to deliver rapidly updating probabilistic hazard information alongside NWS warnings, watches, and other products. With TIM, a warning polygon is attached to the threat and moves forward along with it. This provides more uniform, or equitable, lead time for all locations downstream of the event. When forecaster workload is high, storms remain continually tracked and warned. TIM mitigates gaps in warning coverage and improves the handling of storm motion changes. In addition, warnings are automatically cleared from locations where the threat has passed. Altogether, this results in greater average lead times and lower average departure times than current NWS warnings, with little to no impact on average false alarm time. This is particularly noteworthy for storms expected to live longer than the average warning duration (30 or 45 min), such as long-tracked supercells that are more prevalent during significant tornado outbreaks.
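
A minimal geometric sketch of the Threats-in-Motion idea is shown below: the warning polygon is simply translated downstream with the storm-motion vector at each update. The function, coordinates, and motion values are illustrative only and are not drawn from NWS warning-generation software.

    # Minimal sketch (not NWS software) of the Threats-in-Motion idea: translate a
    # warning polygon downstream with the storm-motion vector at each update.

    def advect_polygon(vertices, u, v, dt_seconds):
        """Shift (x, y) vertices (km) by storm motion u, v (m/s) over dt_seconds."""
        dx = u * dt_seconds / 1000.0   # eastward displacement, km
        dy = v * dt_seconds / 1000.0   # northward displacement, km
        return [(x + dx, y + dy) for x, y in vertices]

    polygon = [(0.0, 0.0), (30.0, 0.0), (30.0, 15.0), (0.0, 15.0)]    # hypothetical warning box, km
    updated = advect_polygon(polygon, u=15.0, v=5.0, dt_seconds=300)  # one 5-min update
    print(updated)   # polygon nudged 4.5 km east and 1.5 km north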

Open access
William A. Komaromi, Patrick A. Reinecke, James D. Doyle, and Jonathan R. Moskaitis

Abstract

The 11-member Coupled Ocean–Atmosphere Mesoscale Prediction System-Tropical Cyclones (COAMPS-TC) ensemble has been developed by the Naval Research Laboratory (NRL) to produce probabilistic forecasts of tropical cyclone (TC) track, intensity, and structure. All members run with a storm-following inner grid at convection-permitting 4-km horizontal resolution. The COAMPS-TC ensemble is constructed via a combination of perturbations to initial and boundary conditions, the initial vortex, and model physics to account for a variety of different sources of uncertainty that affect track and intensity forecasts. Unlike global model ensembles, which do a reasonable job capturing track uncertainty but not intensity uncertainty, mesoscale ensembles such as the COAMPS-TC ensemble are necessary to provide a realistic intensity forecast spectrum. The initial and boundary condition perturbations are responsible for generating the majority of track spread at all lead times, as well as the intensity spread from 36 to 120 h. The vortex and physics perturbations are necessary to produce meaningful spread in the intensity prediction from 0 to 36 h. In a large sample of forecasts from 2014 to 2017, the ensemble-mean track and intensity forecast is superior to the unperturbed control forecast at all lead times, demonstrating a clear advantage of running an ensemble versus a deterministic forecast. The spread–skill relationship of the ensemble is also examined and is found to be well calibrated for track but underdispersive for intensity. Using a mixture of lateral boundary conditions derived from different global models is found to improve the spread–skill relationship for intensity, but it is hypothesized that additional physics perturbations will be necessary to achieve realistic ensemble spread.
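
The spread–skill diagnostic mentioned above can be illustrated with a small sketch that compares the ensemble standard deviation to the RMSE of the ensemble mean; the 11-member intensity values and verifying observations below are invented for illustration and are not COAMPS-TC output.

    import math

    # Toy spread-skill check (values are invented, not COAMPS-TC output): a
    # well-calibrated ensemble has spread comparable to the RMSE of its mean.
    member_forecasts = [   # 11 hypothetical member intensity forecasts (kt) for 3 cases
        [95, 100, 105, 98, 102, 97, 103, 99, 101, 96, 104],
        [60, 70, 65, 62, 68, 63, 67, 61, 69, 64, 66],
        [120, 110, 115, 118, 112, 117, 113, 119, 111, 116, 114],
    ]
    observed = [108, 63, 113]   # verifying intensities (kt)

    variances, sq_errors = [], []
    for members, obs in zip(member_forecasts, observed):
        mean = sum(members) / len(members)
        variances.append(sum((m - mean) ** 2 for m in members) / (len(members) - 1))
        sq_errors.append((mean - obs) ** 2)

    print(f"mean ensemble spread:  {math.sqrt(sum(variances) / len(variances)):.1f} kt")
    print(f"RMSE of ensemble mean: {math.sqrt(sum(sq_errors) / len(sq_errors)):.1f} kt")
    # Spread < RMSE here, i.e., the toy ensemble is underdispersive for intensity.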

Restricted access
Weiguo Wang, Bin Liu, Lin Zhu, Zhan Zhang, Avichal Mehra, and Vijay Tallapragada

Abstract

A new physically based horizontal mixing-length formulation is introduced and evaluated in the Hurricane Weather Research and Forecasting (HWRF) Model. Recent studies have shown that the structure and intensity of tropical cyclones (TCs) simulated by numerical models are sensitive to the horizontal mixing length in the parameterization of horizontal diffusion. Currently, many numerical models, including the operational HWRF Model, formulate the horizontal mixing length as a fixed fraction of the grid spacing or as a constant value, which is not realistic. To improve the representation of the horizontal diffusion process, the new formulation relates the horizontal mixing length to the local wind and its horizontal gradients. The resulting horizontal mixing length and diffusivity are much closer to those derived from field measurements. To understand the impact of different mixing-length formulations, we analyze the evolution of an idealized TC simulated by the HWRF Model with the new formulation and with the current formulation (i.e., constant values) of horizontal mixing length. In two real-case tests, the HWRF Model with the new formulation produces intensity and track forecasts for Hurricanes Harvey (2017) and Lane (2018) that are much closer to observations. Retrospective runs of hundreds of forecast cycles of multiple hurricanes show that the mean errors in intensity and track simulated by HWRF with the new formulation can be reduced by approximately 10%.
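
To illustrate the general idea of tying horizontal mixing to the local wind and its gradients, the sketch below uses a generic deformation-based (Smagorinsky-type) diffusivity; it is not the specific formulation introduced in the paper, and the gradient values and mixing length are arbitrary.

    # Generic deformation-based (Smagorinsky-type) horizontal diffusivity, shown only
    # to illustrate tying mixing to local wind gradients; it is NOT the specific
    # formulation introduced in the paper, and all input values are arbitrary.
    def horizontal_diffusivity(dudx, dudy, dvdx, dvdy, mixing_length_m):
        """K_h = l**2 * |D|, where |D| is the horizontal deformation magnitude (1/s)."""
        deformation = ((dudx - dvdy) ** 2 + (dvdx + dudy) ** 2) ** 0.5
        return mixing_length_m ** 2 * deformation

    # Strongly sheared, eyewall-like gradients (1/s) with a 1500-m mixing length
    K_h = horizontal_diffusivity(dudx=2e-4, dudy=-1e-3, dvdx=1.5e-3, dvdy=-1e-4,
                                 mixing_length_m=1500.0)
    print(f"K_h = {K_h:.0f} m^2 s^-1")   # on the order of 10^3 m^2/s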

Restricted access
K. J. Tory and J. D. Kepert

Abstract

Pyrocumulonimbus (pyroCb) clouds are difficult to predict and can produce extreme and unexpected wildfire behavior that can be very hazardous to fire crews. Many forecasters modify conventional thunderstorm diagnostics to predict pyroCb potential by adding temperature (Δθ) and moisture (Δq) increments to represent smoke plume thermodynamics near the expected plume condensation level. However, estimating these Δθ and Δq increments is a highly subjective process that requires expert knowledge of all factors that might influence future fire size and intensity. In this paper, instead of trying to anticipate these Δθ and Δq increments for a particular fire, the minimum firepower required to generate pyroCb in a given atmospheric environment is considered. This concept, termed the pyroCb firepower threshold (PFT), requires only atmospheric information, removing the need for subjective estimates of the fire contribution. A simple approach to calculating PFT is presented that incorporates only basic plume-rise physics, yielding an analytic solution that offers important insight into plume behavior and pyroCb formation. Minimum increments of Δθ and Δq required for deep, moist convection, plus a minimum cloud-base height (zfc), are diagnosed on a thermodynamic diagram. Briggs's plume-rise equations are used to convert Δθ, zfc, and a mean horizontal wind speed U to a measure of the PFT: the minimum heat flux entering the base of the plume. This PFT is proportional to the product of U, Δθ, and the square of zfc. Plume behavior insights provided by Briggs's equations are discussed, and a selection of PFT examples is presented.
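
The scaling stated above (PFT proportional to the product of U, Δθ, and the square of zfc) can be sketched directly; the proportionality constant below is set to 1 purely for illustration and is not the coefficient derived from the Briggs plume-rise equations in the paper.

    # Sketch of the scaling stated above: PFT ~ C * U * dtheta * z_fc**2. The
    # constant C (which would absorb the Briggs plume-rise coefficients and
    # density/heat-capacity factors) is set to 1 here purely for illustration and
    # is not the value derived in the paper.
    def pft_scaling(U_ms, dtheta_K, z_fc_m, C=1.0):
        """Relative pyroCb firepower threshold from wind speed, theta increment, and z_fc."""
        return C * U_ms * dtheta_K * z_fc_m ** 2

    # Doubling the minimum cloud-base height quadruples the required firepower:
    low = pft_scaling(U_ms=10.0, dtheta_K=5.0, z_fc_m=2000.0)
    high = pft_scaling(U_ms=10.0, dtheta_K=5.0, z_fc_m=4000.0)
    print(high / low)   # -> 4.0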

Open access
Kevin Birk, Eric Lenning, Kevin Donofrio, and Matthew T. Friedlein

Abstract

Using vertical temperature profiles obtained from upper-air observations or numerical weather prediction models, the Bourgouin technique calculates areas of positive melting energy and negative refreezing energy for determining precipitation type. Energies are proportional to the product of the mean temperature of a layer and its depth. Layers warmer than 0°C contribute positive energy; those colder than 0°C contribute negative energy. Sufficient melting or freezing energy in a layer can produce a phase change in a falling hydrometeor. The Bourgouin technique utilizes these energies to determine the likelihood of rain (RA) versus snow (SN) given a surface-based melting layer, and ice pellets (PL) versus freezing rain (FZRA) or RA given an elevated melting layer. The Bourgouin approach was developed from a relatively small dataset but has been widely utilized by operational forecasters and in postprocessing of NWP output. Recent analysis with a larger dataset suggests ways to improve the original technique, especially when discriminating PL from FZRA or RA. This and several other issues are addressed by a modified version of the Bourgouin technique described in this article. Additional enhancements include use of the wet-bulb profile rather than the temperature profile, a check for heterogeneous ice nucleation, and output that includes probabilities of four different weather types (RA, SN, FZRA, PL) rather than the single most likely type. Together these revisions result in improved performance and provide a more viable and valuable tool for precipitation-type forecasts. Several National Weather Service forecast offices have successfully utilized the revised tool in recent winters.
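
The energy bookkeeping described above can be sketched as follows; this is only an illustration of the stated proportionality (mean layer temperature times depth), not the full Bourgouin or revised algorithm, and the profile values are hypothetical.

    # Illustrative sketch of the energy bookkeeping described above (not the full
    # Bourgouin or revised algorithm): each layer contributes energy proportional to
    # its mean temperature (deg C) times its depth, positive above 0 deg C and
    # negative below. The profile values are hypothetical.
    def layer_energies(layers):
        """layers: list of (mean_temp_C, depth_m); returns (melting, refreezing) sums."""
        melting = sum(t * d for t, d in layers if t > 0.0)
        refreezing = sum(t * d for t, d in layers if t < 0.0)
        return melting, refreezing

    # Elevated melting layer above a subfreezing surface layer (top to bottom)
    profile = [(2.0, 500.0), (-4.0, 800.0)]
    melting, refreezing = layer_energies(profile)
    print(melting, refreezing)   # 1000.0 -3200.0  (proportional units)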

Restricted access
Kelsey B. Thompson, Monte G. Bateman, and John R. Mecikalski

Abstract

A total of 13 ocean-based wind events from 2018, detected by buoys and Coastal-Marine Automated Network (C-MAN) stations, were analyzed using 1-min mesoscale sector Advanced Baseline Imager (ABI) cloud-top brightness temperature (CTTB) data, as well as 1-min Geostationary Lightning Mapper (GLM) lightning data. The ABI and GLM instruments are located on the Geostationary Operational Environmental Satellite-16 (GOES-16) satellite. An oceanic wind event was defined as a buoy- or C-MAN station-recorded peak wind gust of at least 14 m s−1 associated with a convective storm. The wind gust was required to exceed the wind speed by at least 4 m s−1 at the time of the event, but could not exceed the corresponding wind speed by at least 4 m s−1 for more than 30 min. This study hypothesized that prior to a wind event, there should be unique signatures in the ABI CTTB and GLM lightning datasets. The presumption was that the minimum CTTB and maximum flash rate should occur near the same time and prior to the event. The minimum CTTB occurred an average of 10.5 min and a median of 7 min prior to events, with a range from 29 min prior to 1 min after the event. Changes in CTTB were often subtle. A maximum flash rate occurred within 5 min of the minimum CTTB for 11 of the 12 events with lightning and did not exceed 11 flashes per minute for 9 of the 12 events with lightning. Operational weather forecasters might use CTTB and lightning trends to help identify storms capable of producing significant oceanic wind events.
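
The event definition above can be expressed as a simple screening rule over 1-min records; the sketch below is one reading of that criterion, and the parallel gust/speed lists are a hypothetical stand-in for the actual buoy and C-MAN data format.

    # Sketch of the wind-event screening described above applied to 1-min records.
    # The parallel gust/speed lists (m/s) are a hypothetical stand-in for the buoy
    # and C-MAN data format, and the 30-min check is one reading of the criterion.
    def find_wind_events(gusts, speeds, max_persist_min=30):
        events = []
        for i, (g, s) in enumerate(zip(gusts, speeds)):
            if g >= 14.0 and g - s >= 4.0:
                # the >= 4 m/s exceedance must not persist for more than 30 min
                window = zip(gusts[i:i + max_persist_min + 1],
                             speeds[i:i + max_persist_min + 1])
                if any(gw - sw < 4.0 for gw, sw in window):
                    events.append(i)
        return events

    speeds = [8.0] * 40
    gusts = [9.0] * 40
    gusts[5] = 15.0                         # brief convective gust at minute 5
    print(find_wind_events(gusts, speeds))  # -> [5]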

Restricted access
Jason M. Cordeira and F. Martin Ralph

Abstract

The ability to provide accurate forecasts and improve situational awareness of atmospheric rivers (ARs) is key to impact-based decision support services and applications such as forecast-informed reservoir operations. The purpose of this study is to quantify, for the 2017–20 cool-season water years, the skill of NCEP Global Ensemble Forecast System forecasts of the integrated water vapor transport commonly observed along the U.S. West Coast during landfalling ARs. This skill is summarized for ensemble probability-over-threshold forecasts of integrated water vapor transport magnitudes ≥ 250 kg m−1 s−1 (referred to as P250). The P250 forecasts near North-Coastal California at 38°N, 123°W were reliable and successful at lead times of ~8–9 days, with an average success ratio > 0.5 for P250 forecasts ≥ 50% at lead times of 8 days and Brier skill scores > 0.1 at lead times of 8–9 days. Skill and accuracy also varied as a function of latitude and event characteristics. The highest (lowest) success ratios and probability of detection values for P250 forecasts ≥ 50% occurred on average across Northern California and Oregon (Southern California), whereas the average probability of detection of more intense and longer-duration landfalling ARs was 0.1–0.2 higher than for weaker and shorter-duration events at lead times of 3–9 days. The potential for these forecasts to enhance situational awareness may also be improved, depending on individual applications, by allowing for flexibility in the location and time of verification; the success ratios increased by 10%–30% at lead times of 5–10 days when allowing for flexibility of ±1.0° latitude and ±6 h in verification.
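
A probability-over-threshold forecast such as P250 is simply the fraction of ensemble members meeting or exceeding the threshold; the sketch below illustrates the calculation with invented member values rather than actual GEFS output.

    # Minimal sketch of an ensemble probability-over-threshold forecast such as P250:
    # the fraction of members whose IVT meets or exceeds 250 kg m-1 s-1. The member
    # values below are invented for illustration, not actual GEFS output.
    def prob_over_threshold(ivt_members, threshold=250.0):
        exceed = sum(1 for ivt in ivt_members if ivt >= threshold)
        return exceed / len(ivt_members)

    members = [180.0, 230.0, 260.0, 275.0, 310.0, 240.0, 255.0, 295.0, 220.0, 265.0]
    print(f"P250 = {prob_over_threshold(members):.0%}")   # -> 60%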

Restricted access
Craig S. Schwartz, Glen S. Romine, and David C. Dowell

Abstract

Using the Weather Research and Forecasting Model, 80-member ensemble Kalman filter (EnKF) analyses with 3-km horizontal grid spacing were produced over the entire conterminous United States (CONUS) for 4 weeks using 1-h continuous cycling. For comparison, similarly configured EnKF analyses with 15-km horizontal grid spacing were also produced. At 0000 UTC, the 15- and 3-km EnKF analyses initialized 36-h, 3-km, 10-member ensemble forecasts that were verified with a focus on precipitation. Additionally, forecasts were initialized from operational Global Ensemble Forecast System (GEFS) initial conditions (ICs) and from experimental “blended” ICs produced by combining large scales from GEFS ICs with small scales from EnKF analyses using a low-pass filter. The EnKFs had stable climates with generally small biases, and precipitation forecasts initialized from 3-km EnKF analyses were more skillful and reliable than those initialized from downscaled GEFS and 15-km EnKF ICs through 12–18 and 6–12 h, respectively. Conversely, after 18 h, GEFS-initialized precipitation forecasts were better than EnKF-initialized precipitation forecasts. Blended 3-km ICs reflected the respective strengths of both GEFS and high-resolution EnKF ICs and yielded the best performance considering all times: blended 3-km ICs led to short-term forecasts with skill and reliability similar to or better than those of forecasts initialized from unblended 3-km EnKF analyses, and to ~18–36-h forecasts of comparable quality to GEFS-initialized forecasts. This work likely represents the first time a convection-allowing EnKF has been continuously cycled over a region as large as the entire CONUS, and the results suggest that blending high-resolution EnKF analyses with low-resolution global fields can potentially unify short-term and next-day convection-allowing ensemble forecast systems under a common framework.
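
The "blended" initial conditions described above combine large scales from the global analysis with small scales from the high-resolution EnKF analysis via a low-pass filter; the sketch below uses a simple Fourier wavenumber cutoff on toy 2D fields, which is illustrative and not the specific filter or grid used in the study.

    import numpy as np

    # Schematic of the "blended" initial condition described above: keep the large
    # scales of the global (GEFS) analysis and the small scales of the high-resolution
    # EnKF analysis, separated with a low-pass filter. A sharp Fourier wavenumber
    # cutoff on toy fields is used here; it is not the filter or grid from the study.
    def blend(gefs_field, enkf_field, cutoff_wavenumber):
        gefs_hat = np.fft.rfft2(gefs_field)
        enkf_hat = np.fft.rfft2(enkf_field)
        ny, nx = gefs_field.shape
        ky = np.fft.fftfreq(ny)[:, None] * ny            # integer wavenumbers in y
        kx = np.fft.rfftfreq(nx)[None, :] * nx           # integer wavenumbers in x
        low = np.sqrt(kx**2 + ky**2) <= cutoff_wavenumber
        blended_hat = np.where(low, gefs_hat, enkf_hat)  # GEFS large scales + EnKF small scales
        return np.fft.irfft2(blended_hat, s=gefs_field.shape)

    rng = np.random.default_rng(0)
    gefs = rng.standard_normal((64, 64))   # toy stand-ins for an analysis variable
    enkf = rng.standard_normal((64, 64))
    blended = blend(gefs, enkf, cutoff_wavenumber=6)
    print(blended.shape)   # (64, 64)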

Restricted access