Browse: Weather and Forecasting

Showing 1–10 of 2,975 items.
Li Jia, Fumin Ren, Chenchen Ding, and Mingyang Wang

Abstract

The Dynamical–Statistical–Analog Ensemble Forecast model for Landfalling Typhoon Precipitation (DSAEF_LTP) was developed as a supplementary method to numerical weather prediction (NWP). A successful strategy for improving the forecasting skill of the DSAEF_LTP model is to include as many relevant variables as possible in the generalized initial value (GIV) of this model. In this study, a new variable, TC translation speed, is incorporated into the DSAEF_LTP model, producing a new version of this model named DSAEF_LTP-4. Then, the best scheme of the model for South China is obtained by applying this model to the forecast of the accumulated rainfall of 13 landfalling tropical cyclones (LTCs) that occurred over South China during 2012–14. In addition, the forecast performance of the best scheme is estimated by forecast experiments with eight LTCs in 2015–16 over South China, and then compared to that of the other versions of the DSAEF_LTP model and three NWP models (i.e., ECMWF, GFS, and T639). Results further demonstrate the improved performance of the DSAEF_LTP-4 model in simulating precipitation of ≥250 and ≥100 mm. However, the forecast performance of DSAEF_LTP-4 is less satisfactory than that of DSAEF_LTP-2, mainly because a large proportion of the TCs had anomalous tracks and because DSAEF_LTP-4 is more sensitive to the characteristics of the experiment samples. Notably, the DSAEF_LTP model performs better than the three NWP models for LTCs with typical tracks.
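The analog-ensemble idea described above can be sketched in simplified form: rank historical LTCs by similarity of GIV variables (here including the newly added translation speed) and combine the accumulated rainfall of the best analogs. All variable names, weights, and the similarity measure below are illustrative assumptions, not the model's actual formulation.

```python
# Hypothetical sketch of the analog-ensemble idea behind DSAEF_LTP:
# rank historical landfalling TCs by weighted dissimilarity of GIV
# variables (track distance, intensity, translation speed), then
# average the accumulated rainfall of the best analogs.

def rank_analogs(target, history, weights):
    """Return historical TCs sorted by weighted GIV dissimilarity."""
    def dissimilarity(tc):
        return sum(w * abs(target[k] - tc[k]) for k, w in weights.items())
    return sorted(history, key=dissimilarity)

def analog_ensemble_rainfall(target, history, weights, n_analogs=3):
    """Mean accumulated rainfall (mm) of the n best analogs."""
    best = rank_analogs(target, history, weights)[:n_analogs]
    return sum(tc["rain_mm"] for tc in best) / len(best)
```

In the real model the ensemble combination and the set of GIV variables are themselves tuned, which is what distinguishes the DSAEF_LTP versions compared in the study.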

Significance Statement

The purpose of this study is to improve the performance of the Dynamical–Statistical–Analog Ensemble Forecast model for Landfalling Typhoon Precipitation (DSAEF_LTP) by incorporating typhoon translation speed similarity. Compared with the dynamical models, which are more prone to misses, the DSAEF_LTP model is more prone to false alarms. The superiority of the DSAEF_LTP model is especially evident in predicting the precipitation of TCs with typical tracks.

Yelena L. Pichugina, Robert M. Banta, W. Alan Brewer, J. Kenyon, J. B. Olson, D. D. Turner, J. Wilczak, S. Baidar, J. K. Lundquist, W. J. Shaw, and S. Wharton

Abstract

Model improvement efforts involve an evaluation of changes in model skill in response to changes in model physics and parameterization. When using wind measurements from various remote sensors to determine model forecast accuracy, it is important to understand the effects of measurement-uncertainty differences among the sensors resulting from differences in the methods of measurement, the vertical and temporal resolution of the measurements, and the spatial variability of these differences. Here we quantify instrument measurement variability in 80-m wind speed during the Second Wind Forecast Improvement Project (WFIP2) and its impact on the calculated errors and on the change in error from one model version to another. The model versions tested involved updates in model physics from HRRRv1 to HRRRv4, and reductions in grid interval from 3 km to 750 m. Model errors were found to be 2–3 m s−1. Differences in errors as determined by various instruments at each site amounted to about 10% of this value, or 0.2–0.3 m s−1. Changes in model skill due to physics or grid-resolution updates also differed depending on the instrument used to determine the errors; most of the instrument-to-instrument differences were ∼0.1 m s−1, but some reached 0.3 m s−1. All instruments at a given site mostly showed consistency in the sign of the change in error. In two examples, though, the sign changed, illustrating a consequence of differences in measurements: errors determined using one instrument may show improvement in model skill, whereas errors determined using another instrument may indicate degradation. This possibility underscores the importance of having accurate measurements to determine the model error.
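The comparison described above reduces to a simple pattern: score a model version against each collocated instrument, then examine the instrument-to-instrument spread of those errors. A minimal sketch (not the authors' code; names and data are illustrative):

```python
# Compute a model's 80-m wind speed RMSE against each instrument, plus
# the spread of those errors across instruments -- the quantity that
# determines whether an apparent model improvement is robust to the
# choice of verifying sensor.

def rmse(model, obs):
    """Root-mean-square error between paired model and observed values."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

def error_spread(model, instruments):
    """RMSE against each instrument, and the max-minus-min spread."""
    errors = {name: rmse(model, obs) for name, obs in instruments.items()}
    spread = max(errors.values()) - min(errors.values())
    return errors, spread
```

If the spread is comparable to the error change between two model versions (here ∼0.1–0.3 m s−1 against ∼2–3 m s−1 errors), the sign of the apparent skill change can depend on the instrument, as the abstract notes.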

Significance Statement

To evaluate model forecast accuracy using remote sensing instruments, it is important to understand the effects of measurement uncertainties due to differences in the methods of measurement and data processing techniques, the vertical and temporal resolution of the measurements, and the spatial variability of these differences. In this study, three types of collocated remote sensing systems are used to quantify the impact of measurement variability on the magnitude of calculated errors and the change in error from one model version to another. The model versions tested involved updates in model physics from HRRRv1 to HRRRv4, and reductions in grid interval from 3 km to 750 m.

Chen Zhao, Tim Li, and Mingyu Bi

Abstract

The Advanced Research version of the Weather Research and Forecasting (WRF-ARW) model is used to investigate the influence of an easterly wave (EW) on the genesis of Typhoon Hagupit (2008) in the western North Pacific. Observational analysis indicates that the precursor disturbance of Typhoon Hagupit (2008) was an EW in the western North Pacific that can be detected at least seven days prior to typhoon genesis. In the control experiment, the genesis of the typhoon is well captured. A sensitivity experiment is conducted by filtering out the synoptic-scale (3–8-day) signals associated with the EW. The absence of the EW eliminates the typhoon genesis. Two mechanisms are proposed for the effect of the EW on the genesis of Hagupit. First, the background cyclonic vorticity of the EW could induce small-scale cyclonic vorticities to merge and develop into a system-scale vortex. Second, the EW provides a favorable in situ environment for the rapid development of the typhoon disturbance through a positive moisture–convection feedback.

Bu-Yo Kim, Miloslav Belorid, and Joo Wan Cha

Abstract

Accurate visibility prediction is imperative for human and environmental health. However, existing numerical models for visibility prediction are characterized by low prediction accuracy and high computational cost. Thus, in this study, we predicted visibility using tree-based machine learning algorithms and numerical weather prediction data from the local data assimilation and prediction system (LDAPS) of the Korea Meteorological Administration. We then evaluated the accuracy of visibility prediction for Seoul, South Korea, through a comparative analysis using observed visibility from the automated synoptic observing system. The visibility predicted by the machine learning algorithm was compared with the visibility predicted by LDAPS. The LDAPS data employed to construct the visibility prediction model were divided into learning, validation, and test sets. The optimal machine learning algorithm for visibility prediction was determined using the learning and validation sets. In this study, the extreme gradient boosting (XGB) algorithm showed the highest accuracy for visibility prediction. Comparative results using the test sets revealed lower prediction error and a higher correlation coefficient for visibility predicted by the XGB algorithm (bias: −0.62 km, MAE: 2.04 km, RMSE: 2.94 km, and R: 0.88) than for that predicted by LDAPS (bias: −0.32 km, MAE: 4.66 km, RMSE: 6.48 km, and R: 0.40). Moreover, the mean equitable threat score (ETS) also indicated higher prediction accuracy for visibility predicted by the XGB algorithm (ETS: 0.5–0.6 across visibility ranges) than for that predicted by LDAPS (ETS: 0.1–0.2).
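The equitable threat score used in this verification has a standard contingency-table definition: the threat score adjusted for the number of hits expected by chance. A minimal self-contained sketch (not tied to the LDAPS or XGB pipeline; the example data are invented):

```python
def equitable_threat_score(forecast, observed):
    """Equitable threat score (ETS) for paired binary event flags.

    forecast, observed: sequences of 0/1 flags (e.g., visibility below
    a threshold in each verification period).
    ETS = (hits - hits_random) / (hits + misses + false_alarms - hits_random),
    where hits_random = (hits + misses) * (hits + false_alarms) / N is the
    number of hits expected by chance.
    """
    pairs = list(zip(forecast, observed))
    n = len(pairs)
    hits = sum(1 for f, o in pairs if f and o)
    misses = sum(1 for f, o in pairs if not f and o)
    false_alarms = sum(1 for f, o in pairs if f and not o)
    hits_random = (hits + misses) * (hits + false_alarms) / n
    denom = hits + misses + false_alarms - hits_random
    return (hits - hits_random) / denom if denom else 0.0
```

ETS ranges from −1/3 to 1, with 0 meaning no skill beyond chance, which is why the 0.5–0.6 scores quoted for the XGB model represent a large gain over the 0.1–0.2 scores for LDAPS.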

Edward J. Strobach

Abstract

Parameterizing planetary boundary layer (PBL) turbulence is a critical component of numerical weather prediction and of the representation of turbulent mixing of momentum, heat, and other tracers. The components that make up a PBL scheme can vary considerably, with each scheme having a combination of physically represented processes along with tuning parameters that optimize performance. Isolating a component of a PBL scheme to examine its impact is essential for understanding the evolution of boundary layer profiles and their impact on the mean structure. In this study we conduct three experiments with the scale-aware TKE eddy-diffusivity mass-flux (sa-TKE-EDMF) scheme: 1) releasing the upper-limit constraints placed on mixing lengths, 2) incrementally adjusting the tuning coefficient related to wind shear in the modified Bougeault and Lacarrère (BouLac) mixing-length formulation, and 3) replacing the current mixing-length formulations with those used in the MYNN scheme. A diagnostic approach is adopted to characterize the bulk representation of turbulence within the residual layer and boundary layer, in order to understand the importance of different terms in the TKE budget and to assess how the balance of terms changes between mixing-length formulations. Although this study does not seek to determine the best formulation, strong imbalances were found to lead to considerably different profile structures in both the resolved and subgrid fields. Experiments in which this balance was preserved showed a minor impact on the mean structure regardless of the turbulence generated. Overall, changes to mixing-length formulations and/or constraints had stronger impacts during the day, with comparatively little sensitivity in the evening.

Jordan J. Laser, Michael C. Coniglio, Patrick S. Skinner, and Elizabeth N. Smith

Abstract

Observational data collection is extremely hazardous in supercell storm environments, which makes for a scarcity of data for evaluating the storm-scale guidance from convection-allowing models (CAMs) like the National Oceanic and Atmospheric Administration (NOAA) Warn-on-Forecast System (WoFS). The Targeted Observations with UAS and Radar of Supercells (TORUS) 2019 field mission provided a rare opportunity not only to collect these observations, but to do so with advanced technology: vertically pointing Doppler lidar. One standing question for WoFS is how the system forecasts the feedback between supercells and their near-storm environment. The lidar can observe vertical profiles of wind over time, creating unique datasets to compare to WoFS kinematic predictions in rapidly evolving severe weather environments. Mobile radiosonde data are also presented to provide a thermodynamic comparison. The five lidar deployments analyzed (three of which observed tornadic supercells) show that WoFS accurately predicted general kinematic trends in the inflow environment; however, the predicted feedback between the supercell and its environment, which resulted in enhanced inflow and larger storm-relative helicity (SRH), was muted relative to observations. The radiosonde observations reveal an overprediction of CAPE in WoFS forecasts, in both the near and far field, with an inverse relationship between the CAPE errors and distance from the storm.

Significance Statement

It is difficult to evaluate the accuracy of weather prediction model forecasts of severe thunderstorms because observations are rarely available near the storms. However, the TORUS 2019 field experiment collected multiple specialized observations in the near-storm environment of supercells, which are compared to the same near-storm environments predicted by the National Oceanic and Atmospheric Administration (NOAA) Warn-on-Forecast System (WoFS) to gauge its performance. Unique to this study is the use of mobile Doppler lidar observations in the evaluation; lidar can retrieve the horizontal winds in the few kilometers above ground on time scales of a few minutes. Using lidar and radiosonde observations in the near-storm environment of three tornadic supercells, we find that WoFS generally predicts the expected trends in the evolution of the near-storm wind profile, but the response is muted compared to observations. We also find an inverse relationship of errors in instability to distance from the storm. These results can aid model developers in refining model physics to better predict severe storms.

Cameron J. Nixon and John T. Allen

Abstract

Hodographs are valuable sources of pattern recognition in severe convective storm forecasting. Certain shapes are known to discriminate between single-cell, multicell, and supercell storm organization. Various derived quantities such as storm-relative helicity (SRH) have been found to predict tornado potential and intensity. Over the years, collective research has established a conceptual model for tornadic hodographs (large and “looping,” with high SRH). However, considerably less attention has been given to constructing a similar conceptual model for hodographs of severe hail. This study explores how hodograph shape may differentiate between the environments of severe hail and tornadoes. While supercells are routinely assumed to carry the potential to produce all hazards, this is not always the case, and we explore why. The Storm Prediction Center (SPC) storm mode dataset is used to assess the environments of 8958 tornadoes and 7256 severe hail reports, produced by right- and left-moving supercells. Composite hodographs and indices to quantify wind shear are assessed for each hazard, and clear differences are found between the kinematic environments of hail-producing and tornadic supercells. The sensitivity of the hodograph to common thermodynamic variables was also examined, with buoyancy and moisture found to influence the shape associated with the hazards. The results suggest that differentiating between tornadic and hail-producing storms may be possible using properties of the hodograph alone. While anticipating hail size does not appear possible using only the hodograph, anticipating tornado intensity appears readily achievable. When coupled with buoyancy profiles, the hodograph may assist in differentiating between both hail size and tornado intensity.
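The SRH quantity central to this comparison can be computed directly from hodograph points. The sketch below uses the standard discretized form of the helicity integral over a layer; the winds and storm motion in the usage example are invented for illustration.

```python
def storm_relative_helicity(u, v, cx, cy):
    """Storm-relative helicity (m^2 s^-2) from hodograph points.

    u, v: wind components (m s^-1) at successive levels, lowest first,
    spanning the layer of interest (e.g., 0-1 km or 0-3 km AGL).
    cx, cy: storm-motion components (m s^-1).
    Positive SRH corresponds to a clockwise-turning (veering)
    storm-relative hodograph, as for right-moving supercells.
    """
    srh = 0.0
    for n in range(len(u) - 1):
        # Twice the signed area swept by the storm-relative wind vector
        # across this layer of the hodograph.
        srh += (u[n + 1] - cx) * (v[n] - cy) - (u[n] - cx) * (v[n + 1] - cy)
    return srh
```

Geometrically, SRH is proportional to the area swept out between the hodograph and the storm-motion point, which is why hodograph *shape* relative to storm motion, not just shear magnitude, discriminates between the hazards discussed above.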

Stephen J. Lord, Xingren Wu, Vijay Tallapragada, and F. M. Ralph

Abstract

The impact of assimilating dropsonde data from the 2020 Atmospheric River (AR) Reconnaissance (ARR) field campaign on operational numerical precipitation forecasts was assessed. Two experiments were executed for the period from 24 January to 18 March 2020 using the NCEP Global Forecast System version 15 (GFSv15) with a four-dimensional hybrid ensemble-variational (4DEnVar) data assimilation system. The control run (CTRL) used all the routinely assimilated data and included ARR dropsonde data, whereas the denial run (DENY) excluded the dropsonde data. There were 17 Intensive Observing Periods (IOPs) totaling 46 Air Force C-130 and 16 NOAA G-IV missions to deploy dropsondes over targeted regions with potential for downstream high-impact weather associated with the ARs. Data from a total of 628 dropsondes were assimilated in the CTRL. The dropsonde data impact on precipitation forecasts over U.S. West Coast domains is largely positive, especially for day 5 lead time, and appears driven by different model variables on a case-by-case basis. These results suggest that data gaps associated with ARs can be addressed with targeted ARR field campaigns providing vital observations needed for improving U.S. West Coast precipitation forecasts.

Adam J. Clark and Eric D. Loken

Abstract

Severe weather probabilities are derived from the Warn-on-Forecast System (WoFS) run by NOAA’s National Severe Storms Laboratory (NSSL) during spring 2018 using the random forest (RF) machine learning algorithm. Recent work has shown this method generates skillful and reliable forecasts when applied to convection-allowing model ensembles for the “Day 1” time range (i.e., 12–36-h lead times), but it has been tested in only one other study for lead times relevant to WoFS (e.g., 0–6 h). Thus, in this paper, various sets of WoFS predictors, which include both environment and storm-based fields, are input into a RF algorithm and trained using the occurrence of severe weather reports within 39 km of a point to produce severe weather probabilities at 0–3-h lead times. We analyze the skill and reliability of these forecasts, sensitivity to different sets of predictors, and avenues for further improvements. The RF algorithm produced very skillful and reliable severe weather probabilities and significantly outperformed baseline probabilities calculated by finding the best performing updraft helicity (UH) threshold and smoothing parameter. Experiments where different sets of predictors were used to derive RF probabilities revealed 1) storm attribute fields contributed significantly more skill than environmental fields, 2) 2–5 km AGL UH and maximum updraft speed were the best performing storm attribute fields, 3) the most skillful ensemble summary metric was a smoothed mean, and 4) the most skillful forecasts were obtained when smoothed UH from individual ensemble members was used as a predictor.
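The baseline the RF is compared against can be sketched simply: the probability at a point is the fraction of ensemble members whose UH exceeds a threshold within a neighborhood. The version below is a deliberately simplified 1-D sketch, not the operational 2-D implementation; the threshold and radius are placeholders that would be tuned, as the abstract describes.

```python
# Minimal 1-D sketch of an ensemble UH-threshold baseline probability:
# at each grid point, count the members whose 2-5 km updraft helicity
# exceeds `thresh` anywhere within `radius` points, and divide by the
# ensemble size. A real implementation uses 2-D grids and Gaussian
# smoothing of the resulting field.

def uh_baseline_probs(members, thresh=75.0, radius=1):
    """members: list of per-member UH values along a 1-D grid."""
    npts = len(members[0])
    probs = []
    for i in range(npts):
        lo, hi = max(0, i - radius), min(npts, i + radius + 1)
        hits = sum(1 for m in members if max(m[lo:hi]) >= thresh)
        probs.append(hits / len(members))
    return probs
```

The RF approach generalizes this by letting the algorithm learn which combinations of UH, updraft speed, and environmental fields best match observed reports, rather than fixing a single threshold and smoothing scale.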
