Abstract
This paper introduces a new tool for verifying tropical cyclone (TC) forecasts. Tropical cyclone forecasts made by operational centers and by numerical weather prediction (NWP) models have been objectively verified for decades. Typically, the mean absolute error (MAE) and/or MAE skill are calculated relative to values within the operational center’s best track. Yet, the MAE can be strongly influenced by outliers and yield misleading results. Thus, this paper introduces an assessment of consistency among the MAE skill and two other measures of forecast performance. This “consistency metric” objectively evaluates the forecast-error evolution as a function of lead time based on thresholds applied to 1) the MAE skill; 2) the frequency of superior performance (FSP), which indicates how often one forecast outperforms another; and 3) the median absolute error (MDAE) skill. The utility and applicability of the consistency metric are validated by applying it to four research and forecasting applications. Overall, this consistency metric is a helpful tool to guide analysis and increase confidence in results in a straightforward way. By augmenting the commonly used MAE and MAE skill with this consistency metric and creating a single scorecard with consistency-metric results for TC track, intensity, and significant-wind-radii forecasts, the impact of observing systems, new modeling systems, or model upgrades on TC-forecast performance can be evaluated both holistically and succinctly. This could in turn help forecasters learn from challenging cases and accelerate and optimize developments and upgrades in NWP models.
Significance Statement
Evaluating the impact of observing systems, new modeling systems, or model upgrades on TC forecasts is vital to ensure more rapid and accurate implementations and optimizations. To do so, errors between model forecasts and observed TC parameters are calculated. Historically, analyses of these errors have relied heavily on one or two metrics: the mean absolute error (MAE) and/or MAE skill. Yet, doing so can lead to misleading conclusions if the error distributions are skewed, which often occurs (e.g., for a poorly forecast TC). This paper presents a new, straightforward way to combine useful information from several different metrics to enable a more holistic assessment of forecast errors than the MAE and MAE skill alone provide.
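The three components of the consistency metric can be sketched numerically. The sketch below uses the common convention that skill is the percentage improvement of one forecast over a reference; the thresholds and exact formulation in the paper may differ, and the error values are purely illustrative.

```python
import numpy as np

def mae_skill(err_exp, err_ref):
    # MAE skill: percentage improvement of the experiment's mean absolute
    # error over the reference's
    mae_e, mae_r = np.mean(np.abs(err_exp)), np.mean(np.abs(err_ref))
    return 100.0 * (mae_r - mae_e) / mae_r

def mdae_skill(err_exp, err_ref):
    # MDAE skill: same form, but with the outlier-resistant median
    md_e, md_r = np.median(np.abs(err_exp)), np.median(np.abs(err_ref))
    return 100.0 * (md_r - md_e) / md_r

def fsp(err_exp, err_ref):
    # Frequency of superior performance: percentage of cases in which the
    # experiment's absolute error beats the reference's (ties excluded)
    a, b = np.abs(err_exp), np.abs(err_ref)
    decided = a != b
    return 100.0 * np.mean(a[decided] < b[decided]) if decided.any() else 50.0

# One large outlier dominates the MAE but not the MDAE or FSP
exp = np.array([10.0, 12.0, 9.0, 11.0, 80.0])  # hypothetical track errors (n mi)
ref = np.array([14.0, 15.0, 13.0, 16.0, 20.0])
```

With these numbers the MAE skill is negative (the single 80 n mi bust dominates the mean) while the FSP is 80% and the MDAE skill is positive, illustrating why the three measures can disagree and why checking their consistency is informative.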
Abstract
The sea surface temperature anomaly (SSTA) plays a key role in climate change and extreme weather processes. Usually, SSTA forecast methods consist of numerical and conventional statistical models, and the former can be seriously influenced by the uncertainty of physical parameterization schemes, the nonlinearity of ocean dynamic processes, and the nonrobustness of numerical discretization algorithms. Recently, deep learning has been explored to address forecast issues in the field of oceanography. However, existing deep learning models for ocean forecasting are mainly site specific, designed to forecast at a single point or for a single variable. Moreover, few dedicated deep learning networks have been developed to deal with SSTA field forecasts under typhoon conditions. In this study, a multivariable convolutional neural network (MCNN) is proposed, which can be applied for synoptic-scale SSTA forecasting in the South China Sea. In addition to the SSTA itself, the surface wind speed and the surface current velocity are used as input variables for the prediction networks, effectively reflecting the influences of both local atmospheric dynamic forcing and nonlocal oceanic thermal advection. Experimental results demonstrate that the MCNN exhibits better performance than a single-variable convolutional neural network (SCNN), especially for SSTA forecasts during typhoon passage. While forecast results deteriorate rapidly in the SCNN during the passage of a typhoon, forecast errors in the MCNN are effectively restrained, increasing only slowly over the forecast time owing to the introduction of the surface wind speed in this network.
Abstract
Subseasonal forecasts have recently attracted widespread interest yet remain a challenging issue. A statistical Kalman filter pattern projection method (KFPPM), which absorbs the projection conception of the raw covariance pattern projection (COVPPM) and the adaptive adjustments of the Kalman filter, is proposed to calibrate the single-model forecasts of the daily maximum and minimum temperatures (Tmax and Tmin) for lead times of 8–42 days over East Asia in 2018 derived from the UKMO control (CTL) forecast. The Kalman filter–based gridded calibration (KFGC) is carried out in parallel as a benchmark and improves the forecast skills to a certain extent. The COVPPM effectively calibrates the temperature forecasts at the early stage and displays better performance than the CTL and KFGC. However, with growing lead times, its skill decreases rapidly and it can no longer produce positive adjustments over the areas outside the plateaus. By contrast, the KFPPM consistently outperforms the other calibrations and reduces the forecast errors by almost 1.0° and 0.5°C for Tmax and Tmin, respectively, both remaining superior to the random climatology benchmark out to a lead time of 24 days. The advantage of the KFPPM is maintained throughout the whole subseasonal range, with the most conspicuous improvements distributed over the Tibetan Plateau and its surroundings. Although the postprocessing procedures are more skillful in calibrating Tmax forecasts than Tmin forecasts, the Tmax forecasts are still characterized by lower skill than the Tmin forecasts. Case experiments further demonstrate the abovementioned features and imply the potential capability of the KFPPM to improve forecast skill and disaster prevention for extreme temperature events.
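Kalman filter–based calibrations of this family build on a simple recursive update: treat the forecast bias as a slowly varying state, predict it with a random-walk model, and correct it with each newly observed forecast error. The scalar sketch below is the generic update, not the paper's KFGC or KFPPM; the noise parameters q and r and the error series are illustrative assumptions.

```python
def kalman_bias_update(bias_prev, p_prev, error_obs, q=0.1, r=1.0):
    # One scalar Kalman-filter step tracking a slowly varying forecast bias:
    # predict (random-walk model, variance grows by q), then update with the
    # latest observed forecast error (observation-noise variance r)
    p_pred = p_prev + q
    k = p_pred / (p_pred + r)          # Kalman gain
    bias = bias_prev + k * (error_obs - bias_prev)
    p = (1.0 - k) * p_pred
    return bias, p

bias, p = 0.0, 1.0
for e in [2.1, 1.9, 2.2, 2.0]:         # hypothetical daily Tmax errors (°C)
    bias, p = kalman_bias_update(bias, p, e)
# calibrated forecast = raw forecast - estimated bias
```

After a few steps the bias estimate converges toward the ~2°C running error while the state variance shrinks, which is the adaptive behavior the pattern-projection method inherits from the Kalman filter.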
Abstract
This study comprehensively assesses the overall impact of dropsondes on tropical cyclone (TC) forecasts. We compare two experiments to quantify dropsonde impact: one that assimilated and another that denied dropsonde observations. These experiments used a basin-scale, multistorm configuration of the Hurricane Weather Research and Forecasting Model (HWRF) and covered active North Atlantic basin periods during the 2017–20 hurricane seasons. This work highlights the importance of a sufficiently large sample size as well as of thoroughly understanding the error distribution by stratifying results. Overall, dropsondes directly improved forecasts during sampled periods and indirectly impacted forecasts during unsampled periods. Benefits for forecasts of track, intensity, and outer wind radii were more pronounced during sampled periods. The forecast improvements of outer wind radii were most notable given the impact that TC size has on TC-hazards forecasts. Additionally, robustly observing the inner- and near-core region was necessary for hurricane-force wind radii forecasts. Yet, these benefits were heavily dependent on the data assimilation (DA) system quality. More specifically, dropsondes only improved forecasts when the analysis used mesoscale error covariance derived from a cycled HWRF ensemble, suggesting that it is a vital DA component. Further, while forecast improvements were found regardless of initial classification and in steady-state TCs, TCs undergoing an intensity change had diminished benefits. The diminished benefits during intensity change probably reflect continued DA deficiencies. Thus, improving the DA system and addressing observing-system limitations would likely enhance dropsonde impacts.
Significance Statement
This study uses a regional hurricane model to conduct the most comprehensive assessment of the impact of dropsondes on tropical cyclone (TC) forecasts to date. The main finding is that dropsondes can improve many aspects of TC forecasts if their data are assimilated with sufficiently advanced assimilation techniques. Particularly notable is the impact of dropsondes on TC outer-wind-radii forecasts, since improving those forecasts leads to more effective TC-hazard forecasts.
Abstract
The National Weather Service (NWS) plays a critical role in alerting the public when dangerous weather occurs. Tornado warnings are one of the most publicly visible products the NWS issues, given the large societal impacts tornadoes can have. Understanding the performance of these warnings is crucial for providing adequate warning during tornadic events and improving overall warning performance. This study aims to understand warning performance during the lifetimes of individual storms (specifically in terms of probability of detection and lead time). For example, does probability of detection vary based on whether the tornado was the first produced by the storm or the last? We use tornado outbreak data from 2008 to 2014, archived NEXRAD radar data, and the NWS verification database to associate each tornado report with a storm object. This approach allows for an analysis of warning performance based on the chronological order of tornado occurrence within each storm. Results show that the probability of detection and lead time increase with later tornadoes in the storm; the first tornadoes of each storm are less likely to be warned and on average have less lead time. Probability of detection also decreases overnight, especially for first tornadoes and storms that only produce one tornado. These results are important for understanding how tornado warning performance varies during individual storm life cycles and how upstream forecast products (e.g., Storm Prediction Center tornado watches, mesoscale discussions, etc.) may increase warning confidence for the first tornado produced by each storm.
Significance Statement
In this study, we focus on better understanding real-time tornado warning performance on a storm-by-storm basis. This approach allows us to examine how warning performance can change based on the order of each tornado within its parent storm. Using tornado reports, warning products, and radar data during tornado outbreaks from 2008 to 2014, we find that probability of detection and lead time increase with later tornadoes produced by the same storm. In other words, for storms that produce multiple tornadoes, the first tornado is generally the least likely to be warned in advance; when it is warned in advance, it generally provides less lead time than subsequent tornadoes. These findings provide important new analyses of tornado warning performance, particularly for the first tornado of each storm, and will help inform strategies for improving warning performance.
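Once each tornado report is tagged with its chronological order within the parent storm, the storm-relative statistics reduce to simple aggregations. A minimal sketch with hypothetical records (the data values are invented for illustration):

```python
# Hypothetical records: (storm_id, order_in_storm, warned, lead_time_min)
tornadoes = [
    ("A", 1, False, 0.0), ("A", 2, True, 8.0), ("A", 3, True, 14.0),
    ("B", 1, True, 3.0),  ("B", 2, True, 12.0),
    ("C", 1, False, 0.0),
]

def pod(records):
    # Probability of detection: fraction of tornadoes with a warning in effect
    return sum(r[2] for r in records) / len(records)

def mean_lead(records):
    # Mean lead time over warned tornadoes only
    warned = [r[3] for r in records if r[2]]
    return sum(warned) / len(warned) if warned else 0.0

first = [r for r in tornadoes if r[1] == 1]
later = [r for r in tornadoes if r[1] > 1]
```

In this toy sample, pod(first) is 1/3 while pod(later) is 1.0, and mean_lead(later) exceeds mean_lead(first), mirroring the paper's finding that first tornadoes are warned less often and with less lead time.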
Abstract
This study compares aerosol direct radiative effects on numerical weather forecasts made by the NCEP Global Forecast System (GFS) with two different aerosol datasets, the Optical Properties of Aerosols and Clouds (OPAC) and MERRA-2 aerosol climatologies. The underestimation of aerosol optical depth (AOD) by OPAC over northwest Africa, central to East Africa, the Arabian Peninsula, Southeast Asia, and the Indo-Gangetic Plain, and overestimation in the storm-track regions in both hemispheres are reduced by MERRA-2. Surface downward shortwave (SW) and longwave (LW) fluxes and the top-of-the-atmosphere SW and outgoing LW fluxes from model forecasts are compared with CERES satellite observations. Forecasts made with OPAC aerosols have large radiative flux biases, especially in northwest Africa and the storm-track regions. These biases are also reduced in the forecasts made with MERRA-2 aerosols. The improvements from MERRA-2 are most noticeable in the surface downward SW fluxes. GFS medium-range weather forecasts made with the MERRA-2 aerosols demonstrate slightly improved forecast accuracy of sea level pressure and precipitation over the Indian and East Asian summer monsoon region. A stronger African easterly jet is produced, associated with lower pressure over the eastern Atlantic Ocean west of northwest Africa. Impacts on large-scale skill scores such as 500-hPa geopotential height anomaly correlation are generally positive in the Northern Hemisphere and the Pacific and North American regions in both the winter and summer seasons.
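The 500-hPa geopotential height anomaly correlation mentioned above is a standard verification score: the centered correlation between forecast and analysis departures from a common climatology. A minimal sketch with synthetic numbers (no latitude weighting, which operational verification would normally include):

```python
import numpy as np

def anomaly_correlation(fcst, anal, clim):
    # Centered anomaly correlation: correlate forecast and analysis
    # anomalies about the same climatology, with spatial means removed
    fa = fcst - clim
    aa = anal - clim
    fa = fa - fa.mean()
    aa = aa - aa.mean()
    return np.sum(fa * aa) / np.sqrt(np.sum(fa**2) * np.sum(aa**2))

clim = np.array([5500.0, 5600.0, 5700.0, 5800.0])  # synthetic 500-hPa heights (m)
anal = clim + np.array([20.0, -10.0, 15.0, -25.0])
perfect = anomaly_correlation(anal, anal, clim)    # identical anomaly pattern
```

A forecast reproducing the analyzed anomaly pattern scores 1.0 regardless of amplitude; a pattern of opposite sign scores −1.0, which is why the score isolates pattern skill rather than bias.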
Abstract
This study employs a long time series (1997–2017) of reforecasts based on a version of the ECMWF Integrated Forecast System to evaluate the dependence of medium-range (i.e., 3–15 days) precipitation forecast skill over California on the state of the large-scale atmospheric flow. As a basis for this evaluation, four recurrent large-scale flow regimes over the North Pacific and western North America associated with precipitation in a domain encompassing northern and central California were objectively identified in ECMWF ERA5 reanalysis data for November–March 1981–2017. Two of the regimes are characterized by zonal upper-level flow across the North Pacific, and the other two are characterized by wavy, blocked flow. Forecast verification statistics conditioned on regime occurrence indicate considerably lower medium-range precipitation skill over California in blocking regimes than in zonal regimes. Moreover, forecasts of blocking regimes tend to exhibit larger errors and uncertainty in the synoptic-scale flow over the eastern North Pacific and western North America compared with forecasts of zonal regimes. Composite analyses for blocking forecasts reveal a tendency for errors to develop in conjunction with the amplification of a ridge over the western and central North Pacific. The errors in the ridge tend to be communicated through the large-scale Rossby wave pattern, resulting in misforecasting of downstream trough amplification and, thereby, moisture flux and precipitation over California. The composites additionally indicate that error growth in the blocking ridge can be linked to misrepresentation of baroclinic development as well as upper-level divergent outflow associated with latent heat release.
Significance Statement
This study examines the degree to which the medium-range (out to ∼2-week lead time) precipitation forecast skill over California depends on the large-scale atmospheric flow regime over the North Pacific. An evaluation of retrospective model forecasts from ECMWF for 1997–2017 reveals that the skill tends to be considerably lower in regimes featuring a wavy, “blocked” North Pacific jet stream than in regimes featuring a west–east-oriented jet stream. This difference in skill relates to a tendency for forecasts of blocked regimes to exhibit significantly larger errors than forecasts of zonal regimes. The results could aid forecasters by increasing situational awareness and informing the interpretation and application of model forecasts for precipitation affecting California.
Abstract
The impact of assimilating dropsonde data from the 2020 Atmospheric River (AR) Reconnaissance (ARR) field campaign on operational numerical weather forecasts was assessed. Two experiments were executed for the period from 24 January to 18 March 2020 using the National Centers for Environmental Prediction (NCEP) Global Forecast System version 15 (GFSv15) with a four-dimensional hybrid ensemble–variational (4DEnVar) data assimilation system. The control run (CTRL) used all of the routinely assimilated data and included data from 628 ARR dropsondes, whereas the denial run (DENY) excluded the dropsonde data. Results from 17 intensive observing periods (IOPs) indicate a mixed impact for mean sea level pressure and geopotential height over the Pacific–North American (PNA) region in CTRL compared to DENY. The overall local impact over the U.S. West Coast and Gulf of Alaska for the 17 IOPs is neutral (−0.45%) for integrated vapor transport (IVT), but positive for wind and moisture profiles (0.5%–1.0%), with a spectrum of statistically significant positive and negative impacts for various IOPs. The positive dropsonde data impact on precipitation forecasts over U.S. West Coast domains appears driven, in part, by improved low-level moisture and wind fields at short forecast lead times. Indeed, data gaps, especially for accurate and unbiased moisture profiles and wind fields, can be at least partially mitigated to improve U.S. West Coast precipitation forecasts.
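Integrated vapor transport, the AR-centric quantity verified above, is conventionally computed as the mass-weighted pressure integral of the horizontal moisture flux, IVT = (1/g)·√((∫qu dp)² + (∫qv dp)²). A sketch with a hypothetical sounding (the profile values are illustrative, not from the campaign):

```python
import numpy as np

G = 9.81  # gravitational acceleration (m s^-2)

def ivt(q, u, v, p):
    """IVT (kg m^-1 s^-1) from specific humidity q (kg kg^-1) and winds
    u, v (m s^-1) on pressure levels p (Pa), ordered surface to top."""
    def integrate(y):
        # trapezoidal rule over pressure; p decreases upward, so negate
        return -np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(p)) / G
    return np.hypot(integrate(q * u), integrate(q * v))

# Hypothetical AR sounding: moist, windy low levels drying aloft
p = np.array([100000.0, 70000.0, 50000.0, 30000.0])
q = np.array([0.012, 0.008, 0.004, 0.001])
u = np.array([15.0, 20.0, 25.0, 30.0])
v = np.zeros(4)
```

This sounding yields an IVT near 900 kg m⁻¹ s⁻¹, a value typical of a strong atmospheric river, and makes clear why dropsonde moisture and wind profiles both feed directly into the verified quantity.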
Abstract
Inland flooding from landfalling tropical cyclones (TCs) is a major cause of death and damage to property and infrastructure worldwide. The mid-Atlantic region of the United States was devastated by Hurricane Irene and Tropical Storm Lee during late August–early September 2011, when the two storms produced sequential heavy rainfall and record flooding. Many rivers and streams reached their highest discharges on record. This study aims to 1) better understand and predict TC rainfall using various observed rainfall products and a high-resolution coupled atmosphere–wave–ocean model, namely, the Unified Wave Interface-Coupled Model (UWIN-CM); 2) characterize inland flooding using streamflow data; and 3) improve prediction of TC-induced inland flooding using UWIN-CM and a machine learning K-nearest-neighbor (KNN) model. The results show that there is a wide range of uncertainty in satellite and radar–gauge-observed rainfall products in terms of rain-rate distribution and cumulative rainfall over the mid-Atlantic region. UWIN-CM rainfall is closer to the radar–gauge data than to the satellite data over land. Streamflow in most large rivers (>500 cfs) peaked after Lee, which reflects the sequential rainfall contributions of the two storms. The rainfall–streamflow–discharge response times depended on stream size and peak rain rates. To better predict rainfall and flooding, UWIN-CM and observed rainfall are used with the KNN regression model to predict the severity of TC-induced inland flood hazards. These results demonstrate the value of a stepped approach for rainfall and flood prediction toward a fully coupled atmosphere–ocean–land/hydrology model in the future.
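The KNN regression step named above is conceptually simple: a flood-severity value is predicted as the average severity of the most similar past cases in predictor space. A minimal stdlib sketch of plain KNN regression — the predictors, severity index, and all numbers below are invented for illustration, not taken from the study:

```python
import math

def knn_regress(train_X, train_y, query, k=3):
    """Predict the mean target of the k nearest training samples
    (Euclidean distance), i.e., plain KNN regression."""
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train_X, train_y))
    return sum(y for _, y in ranked[:k]) / k

# Hypothetical predictors [peak rain rate (mm/h), storm-total rainfall (mm)]
# mapped to an illustrative flood-severity index (values are invented).
X = [[10, 50], [20, 120], [35, 200], [50, 300], [60, 390]]
y = [1, 2, 3, 4, 5]
print(knn_regress(X, y, [40, 250], k=3))  # 3.0
```

In practice the predictors would be standardized first so that one variable's units do not dominate the distance calculation.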
Abstract
Forecast skill from dynamical forecast models decreases quickly with projection time due to various errors. Therefore, postprocessing methods, from simple bias correction methods to more complicated multiple linear regression–based model output statistics, are used to improve raw model forecasts. Usually, these methods show clear forecast improvement over the raw model forecasts, especially for short-range weather forecasts. However, linear approaches have limitations because the relationship between predictands and predictors may be nonlinear. This is even truer for extended-range forecasts, such as week-3–4 forecasts. In this study, neural network techniques are used to model the relationships between a set of predictors and predictands, and eventually to improve week-3–4 precipitation and 2-m temperature forecasts made by the NOAA/NCEP Climate Forecast System. Benefiting from recent advances in machine learning, more flexible and capable algorithms and the availability of big datasets enable us not only to explore nonlinear features or relationships within a given large dataset, but also to extract more sophisticated pattern relationships and covariabilities hidden within the multidimensional predictors and predictands. These more sophisticated relationships and high-level statistical information are then used to correct the model week-3–4 precipitation and 2-m temperature forecasts. The results show that to some extent neural network techniques can significantly improve week-3–4 forecast accuracy and greatly increase efficiency over traditional multiple linear regression methods.
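The core argument — that a network can remove a nonlinear forecast bias that linear model output statistics cannot — can be demonstrated on a toy problem. The sketch below is not the CFS postprocessing system; the data, network size, and training settings are all invented for illustration. A tiny one-hidden-layer tanh network trained by gradient descent fits a quadratic bias that a least-squares line misses:

```python
import math
import random

def train_mlp(xs, ys, hidden=8, epochs=8000, lr=0.05, seed=1):
    """One-hidden-layer tanh network (scalar in/out) trained with
    full-batch gradient descent on mean squared error."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
    b2 = 0.0
    n = len(xs)
    for _ in range(epochs):
        gw1, gb1 = [0.0] * hidden, [0.0] * hidden
        gw2, gb2 = [0.0] * hidden, 0.0
        for x, t in zip(xs, ys):
            h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
            err = sum(w2[j] * h[j] for j in range(hidden)) + b2 - t
            gb2 += err
            for j in range(hidden):
                gw2[j] += err * h[j]
                d = err * w2[j] * (1.0 - h[j] ** 2)  # backprop through tanh
                gw1[j] += d * x
                gb1[j] += d
        for j in range(hidden):
            w1[j] -= lr * gw1[j] / n
            b1[j] -= lr * gb1[j] / n
            w2[j] -= lr * gw2[j] / n
        b2 -= lr * gb2 / n
    return lambda x: sum(w2[j] * math.tanh(w1[j] * x + b1[j])
                         for j in range(hidden)) + b2

# Synthetic "raw forecast" x whose "observations" carry a nonlinear bias.
xs = [k / 5.0 for k in range(-5, 6)]
ys = [x + 0.5 * x * x for x in xs]  # truth = forecast + quadratic bias

# Linear MOS-style baseline: ordinary least squares in one predictor.
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
a = sum((x - mx) * (t - my) for x, t in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx
lin_mse = sum((a * x + b - t) ** 2 for x, t in zip(xs, ys)) / len(xs)

net = train_mlp(xs, ys)
nn_mse = sum((net(x) - t) ** 2 for x, t in zip(xs, ys)) / len(xs)
print(nn_mse < lin_mse)  # the network captures the quadratic bias
```

The linear fit can at best absorb the bias's mean, leaving the curvature as residual error, whereas the network approximates the full nonlinear mapping — the same qualitative advantage the study exploits at week-3–4 lead times, where relationships are far from linear.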