Search Results
Showing 1–9 of 9 items for
- Author or Editor: Christopher C. Hennon
Abstract
A new dataset of tropical cloud clusters, which formed or propagated over the Atlantic basin during the 1998–2000 hurricane seasons, is used to develop a probabilistic prediction system for tropical cyclogenesis (TCG). Using data from the National Centers for Environmental Prediction (NCEP)–National Center for Atmospheric Research (NCAR) reanalysis (NNR), eight large-scale predictors are calculated at every 6-h interval of a cluster's life cycle. Discriminant analysis is then used to find a linear combination of the predictors that best separates the developing cloud clusters (those that became tropical depressions) and nondeveloping systems. Classification results are analyzed via composite and case study points of view.
Despite the linear nature of the classification technique, the forecast system yields useful probabilistic forecasts for the vast majority of the hurricane season. The daily genesis potential (DGP) and latitude predictors are found to be the most significant at nearly all forecast times. Composite results show that if the probability of development P < 0.7, TCG rarely occurs; if P > 0.9, genesis occurs about 40% of the time. A case study of Tropical Depression Keith (2000) illustrates the ability of the forecast system to detect the evolution of the large-scale environment from an unfavorable to a favorable one. An additional case study of an early-season nondeveloping cluster demonstrates some of the shortcomings of the system and suggests possible ways of mitigating them.
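The discriminant step described above can be sketched as a Fisher linear discriminant: find the weight vector that best separates developing from nondeveloping clusters in predictor space. The toy data below are invented for illustration, with two stand-in predictors rather than the paper's eight NNR predictors; only the weight computation itself reflects the named technique.

```python
import numpy as np

def fisher_lda_weights(X_dev, X_non):
    """Fisher linear discriminant: w = S_w^{-1} (mu_dev - mu_non),
    where S_w is the pooled within-class scatter matrix."""
    mu_d, mu_n = X_dev.mean(axis=0), X_non.mean(axis=0)
    Sw = (np.cov(X_dev, rowvar=False) * (len(X_dev) - 1)
          + np.cov(X_non, rowvar=False) * (len(X_non) - 1))
    return np.linalg.solve(Sw, mu_d - mu_n)

# Hypothetical sample: two predictors (think DGP and latitude) for
# 40 developing and 60 nondeveloping clusters.
rng = np.random.default_rng(0)
X_dev = rng.normal([3.0, 15.0], 1.0, size=(40, 2))  # developing
X_non = rng.normal([1.0, 12.0], 1.0, size=(60, 2))  # nondeveloping
w = fisher_lda_weights(X_dev, X_non)
```

Projecting a cluster's predictors onto `w` gives a scalar discriminant score; mapping that score through a calibration curve is one way to obtain the probabilistic forecasts the abstract describes.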
Abstract
We evaluate the short-term weather forecast performance of three flavors of artificial neural networks (NNs): feedforward backpropagation, radial basis function, and generalized regression. To prepare the application of the NNs to an operational setting, we tune NN hyperparameters using over two years of historical data. Five objective guidance products serve as predictors to the NNs: North American Mesoscale and Global Forecast System model output statistics (MOS) forecasts, the High-Resolution Rapid Refresh (HRRR) model, National Weather Service forecasts, and the National Blend of Models product. We independently test NN performance using 96 real-time forecasts of temperature, wind, and precipitation across 11 U.S. cities made during the WxChallenge, a weather forecasting competition. We demonstrate that all NNs significantly improve short-range weather forecasts relative to the traditional objective guidance aids used to train the networks. For example, 1-day maximum and minimum temperature forecast error is 20%–30% lower than MOS. However, NN improvement over multiple linear regression for short-term forecasts is not significant. We suggest this may be attributed to the small number of training samples, the operational nature of the experiment, and the short forecast lead times. Regardless, our results are consistent with previous work suggesting that applying NNs to model forecasts can have a positive impact on operational forecast skill and will become valuable tools when integrated into the forecast enterprise.
Significance Statement
We used approximately two years of historical weather data and objective forecasts for a number of cities to tune a series of artificial neural networks (NNs) to forecast 1-day values of maximum and minimum temperature, maximum sustained wind speed, and quantitative precipitation. We compare forecast error against common objective guidance and multiple linear regression. We found that the NNs exhibit about 25% lower error than common objective guidance for temperature forecasting and 50% lower error for wind speed. Our results suggest that NNs will be a valuable contributor to improving weather forecast skill when adopted into the existing forecast enterprise.
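Of the three NN flavors named in the abstract, the generalized regression NN is the simplest to sketch: it is Nadaraya–Watson kernel regression, predicting a Gaussian-kernel-weighted mean of the training targets. The predictor values and target temperatures below are invented; in the study the predictors would be the five guidance products and the target the verifying observation.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=1.0):
    """Generalized regression NN: each prediction is a Gaussian-kernel
    weighted average of the training targets, with bandwidth sigma
    (the hyperparameter one would tune on historical data)."""
    # Squared distances between every query row and every training row
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Hypothetical guidance pairs (e.g. two MOS-like max-temperature
# forecasts, deg C) and the temperatures that verified.
X_train = np.array([[20.0, 21.0], [25.0, 24.0], [30.0, 31.0], [15.0, 16.0]])
y_train = np.array([20.5, 24.5, 30.5, 15.5])
pred = grnn_predict(X_train, y_train, np.array([[24.0, 24.0]]))
```

Because the prediction is a smooth blend of nearby training cases, a GRNN needs no iterative training, which makes it attractive when, as the authors note, the training sample is small.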
Abstract
The utility and shortcomings of near-real-time ocean surface vector wind retrievals from the NASA Quick Scatterometer (QuikSCAT) in operational forecast and analysis activities at the National Hurricane Center (NHC) are described. The use of QuikSCAT data in tropical cyclone (TC) analysis and forecasting for center location/identification, intensity (maximum sustained wind) estimation, and analysis of outer wind radii is presented, along with shortcomings of the data due to the effects of rain contamination and wind direction uncertainties. Automated QuikSCAT solutions in TCs often fail to show a closed circulation, and those that do are often biased to the southwest of the NHC best-track position. QuikSCAT winds show the greatest skill in TC intensity estimation in moderate to strong tropical storms. In tropical depressions, a positive bias in QuikSCAT winds is seen due to enhanced backscatter by rain, while in major hurricanes rain attenuation, resolution, and signal saturation result in a large negative bias in QuikSCAT intensity estimates.
QuikSCAT wind data help overcome the large surface data void in the analysis and forecast area of NHC’s Tropical Analysis and Forecast Branch (TAFB). These data have resulted in improved analyses of surface features, better definition of high wind areas, and improved forecasts of high-wind events. The development of a climatology of gap wind events in the Gulf of Tehuantepec has been possible due to QuikSCAT wind data in a largely data-void region.
The shortcomings of ocean surface vector winds from QuikSCAT in the operational environment at NHC are described, along with requirements for future ocean surface vector wind missions. These include improvements in the timeliness and quality of the data, increasing the wind speed range over which the data are reliable, and decreasing the impact of rain to allow for accurate retrievals in all weather conditions.
Abstract
A binary neural network classifier is evaluated against linear discriminant analysis within the framework of a statistical model for forecasting tropical cyclogenesis (TCG). A dataset consisting of potential developing cloud clusters that formed during the 1998–2001 Atlantic hurricane seasons is used in conjunction with eight large-scale predictors of TCG. Each predictor value is calculated at analysis time. The model yields 6–48-h probability forecasts for genesis at 6-h intervals. Results consistently show that the neural network classifier performs comparably to or better than linear discriminant analysis on all performance measures examined, including probability of detection, Heidke skill score, and forecast reliability. Two case studies are presented to investigate model performance and the feasibility of adapting the model to operational forecast use.
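Two of the verification measures named above, probability of detection and the Heidke skill score, follow directly from a 2x2 contingency table of forecast versus observed genesis. The counts in the example are invented; the formulas are the standard ones.

```python
def probability_of_detection(hits, misses):
    """POD: fraction of observed genesis events that were forecast."""
    return hits / (hits + misses)

def heidke_skill_score(hits, false_alarms, misses, correct_negatives):
    """Heidke skill score for a 2x2 contingency table:
    HSS = 2(ad - bc) / [(a+c)(c+d) + (a+b)(b+d)],
    with a=hits, b=false alarms, c=misses, d=correct negatives.
    1 = perfect forecast, 0 = no skill over random chance."""
    a, b, c, d = hits, false_alarms, misses, correct_negatives
    return 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))

# Invented verification counts for a season of 6-h genesis forecasts
hss = heidke_skill_score(hits=12, false_alarms=8, misses=5, correct_negatives=175)
pod = probability_of_detection(hits=12, misses=5)
```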
Abstract
The Cyclone Center project maintains a website that allows visitors to answer questions based on tropical cyclone satellite imagery. The goal is to provide a reanalysis of satellite-derived tropical cyclone characteristics from a homogeneous historical database composed of satellite imagery with a common spatial resolution for use in long-term, global analyses. The determination of the cyclone “type” (curved band, eye, shear, etc.) is a starting point for this process. This analysis shows how multiple classifications of a single image are combined to provide probabilities of a particular image’s type using an expectation–maximization (EM) algorithm. Analysis suggests that the project needs about 10 classifications of an image to adequately determine the storm type. The algorithm is capable of characterizing classifiers with varying levels of expertise, though the project needs about 200 classifications to quantify an individual’s precision. The EM classifications are compared with an objective algorithm, satellite fix data, and the classifications of a known classifier. The EM classifications compare well, with best agreement for eye and embedded center storm types and less agreement for shear and when convection is too weak (termed no-storm images). Both the EM algorithm and the known classifier showed similar tendencies when compared against an objective algorithm. The EM algorithm also fared well when compared to tropical cyclone fix datasets, having higher agreement with embedded centers and less agreement for eye images. The results were used to show the distribution of storm types versus wind speed during a storm’s lifetime.
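The aggregation step described above can be sketched with a simplified one-coin Dawid–Skene EM scheme: each classifier is modeled by a single accuracy, the E-step infers a posterior over each image's true storm type, and the M-step re-estimates the accuracies. This is a stripped-down stand-in under stated assumptions, not the project's exact algorithm, and the votes below are invented.

```python
import numpy as np

def em_aggregate(labels, n_types, n_iter=50):
    """labels: dict image_id -> list of (classifier_id, voted_type).
    Assumes each classifier is correct with probability acc[c] and
    otherwise votes uniformly over the remaining types."""
    images = sorted(labels)
    classifiers = sorted({c for votes in labels.values() for c, _ in votes})
    acc = {c: 0.8 for c in classifiers}  # initial accuracy guess
    post = {i: np.full(n_types, 1.0 / n_types) for i in images}
    for _ in range(n_iter):
        # E-step: posterior over each image's true type
        for i in images:
            logp = np.zeros(n_types)
            for c, v in labels[i]:
                wrong = (1.0 - acc[c]) / (n_types - 1)
                for t in range(n_types):
                    logp[t] += np.log(acc[c] if v == t else wrong)
            p = np.exp(logp - logp.max())
            post[i] = p / p.sum()
        # M-step: a classifier's accuracy is the mean posterior mass
        # on the types it voted for
        for c in classifiers:
            num = den = 0.0
            for i in images:
                for cc, v in labels[i]:
                    if cc == c:
                        num += post[i][v]
                        den += 1.0
            acc[c] = min(max(num / den, 1e-3), 1.0 - 1e-3)
    return post, acc

# Three hypothetical images and classifiers; classifier 2 often disagrees.
votes = {0: [(0, 1), (1, 1), (2, 0)],
         1: [(0, 2), (1, 2), (2, 2)],
         2: [(0, 0), (1, 0), (2, 1)]}
post, acc = em_aggregate(votes, n_types=3)
```

Even in this tiny example, the algorithm both recovers the consensus type for each image and assigns the dissenting classifier a lower accuracy, mirroring the abstract's point that the method characterizes classifiers of varying expertise.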
Abstract
An algorithm to detect and track global tropical cloud clusters (TCCs) is presented. TCCs are large, organized areas of convection that form over warm tropical waters. TCCs are important because they are the “seedlings” that can evolve into tropical cyclones. A TCC satisfies the necessary condition of a “preexisting disturbance,” which provides the required latent heat release to drive the development of tropical cyclone circulations. The operational prediction of tropical cyclogenesis is poor because of weaknesses in the observational network and numerical models; thus, past studies have focused on identifying differences between “developing” (evolving into a tropical cyclone) and “nondeveloping” (failing to do so) TCCs in the global analysis fields to produce statistical forecasts of these events.
The algorithm presented here has been used to create a global dataset of all TCCs that formed from 1980 to 2008. Capitalizing on a global, Gridded Satellite (GridSat) infrared (IR) dataset, areas of persistent, intense convection are identified by analyzing characteristics of the IR brightness temperature (Tb) fields. Identified TCCs are tracked as they move around their ocean basin (or cross into others); variables such as TCC size, location, convective intensity, cloud-top height, development status (i.e., developing or nondeveloping), and a movement vector are recorded in Network Common Data Form (NetCDF). The algorithm can be adapted to near-real-time tracking of TCCs, which could be of great benefit to the tropical cyclone forecast community.
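The core detection idea, finding contiguous areas of cold IR brightness temperature, can be sketched as a threshold plus connected-component search. The 224 K threshold and 10-pixel minimum size below are illustrative placeholders, not the paper's actual criteria, and real TCC detection adds persistence and shape checks on top of this.

```python
import numpy as np
from collections import deque

def detect_clusters(tb, threshold=224.0, min_pixels=10):
    """Flag candidate cloud clusters in a 2-D IR brightness temperature
    field (K): pixels colder than `threshold` are treated as convective,
    and 4-connected cold regions of at least `min_pixels` are returned
    as lists of (row, col) pixel coordinates."""
    cold = tb < threshold
    seen = np.zeros_like(cold, dtype=bool)
    rows, cols = cold.shape
    clusters = []
    for r in range(rows):
        for c in range(cols):
            if cold[r, c] and not seen[r, c]:
                # Breadth-first search over the 4-connected cold region
                queue, member = deque([(r, c)]), []
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    member.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and cold[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                if len(member) >= min_pixels:
                    clusters.append(member)
    return clusters

# Synthetic field: warm background, one 5x5 cold block, one lone cold pixel
tb = np.full((20, 20), 280.0)
tb[5:10, 5:10] = 200.0
tb[0, 0] = 200.0
clusters = detect_clusters(tb)
```

In practice a library routine such as `scipy.ndimage.label` would replace the hand-rolled search; tracking then links detections across successive 3-hourly GridSat images.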
Abstract
The global tropical cyclone (TC) intensity record, even in modern times, is uncertain because the vast majority of storms are only observed remotely. Forecasters determine the maximum wind speed using a patchwork of sporadic observations and remotely sensed data. A popular tool that aids forecasters is the Dvorak technique—a procedural system that estimates the maximum wind based on cloud features in IR and/or visible satellite imagery. Inherently, the application of the Dvorak procedure is open to subjectivity. Heterogeneities are also introduced into the historical record with the evolution of operational procedures, personnel, and observing platforms. These uncertainties impede our ability to identify the relationship between tropical cyclone intensities and, for example, recent climate change.
A global reanalysis of TC intensity using experts is difficult because of the large number of storms. We will show that it is possible to effectively reanalyze the global record using crowdsourcing. Through modifying the Dvorak technique into a series of simple questions that amateurs (“citizen scientists”) can answer on a website, we are working toward developing a new TC dataset that resolves intensity discrepancies in several recent TCs. Preliminary results suggest that the performance of human classifiers in some cases exceeds that of an automated Dvorak technique applied to the same data for times when the storm is transitioning into a hurricane.
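The final step of any Dvorak-style analysis, converting a current intensity (CI) number into a maximum sustained wind, is a table lookup. The values below are one commonly cited version of the Atlantic CI-to-wind table (in knots); verify against the operational reference before relying on them, and note that other basins use different conversions.

```python
# Commonly cited Atlantic Dvorak CI-number -> maximum sustained wind (kt)
DVORAK_WIND_KT = {
    1.0: 25, 1.5: 25, 2.0: 30, 2.5: 35, 3.0: 45, 3.5: 55,
    4.0: 65, 4.5: 77, 5.0: 90, 5.5: 102, 6.0: 115, 6.5: 127,
    7.0: 140, 7.5: 155, 8.0: 170,
}

def dvorak_wind(ci):
    """Maximum sustained wind (kt) for a CI number, linearly
    interpolating between the tabulated half-steps."""
    if ci in DVORAK_WIND_KT:
        return DVORAK_WIND_KT[ci]
    lo = max(k for k in DVORAK_WIND_KT if k < ci)
    hi = min(k for k in DVORAK_WIND_KT if k > ci)
    frac = (ci - lo) / (hi - lo)
    return DVORAK_WIND_KT[lo] + frac * (DVORAK_WIND_KT[hi] - DVORAK_WIND_KT[lo])
```

The transition to hurricane intensity mentioned above corresponds to CI near 4.0 (65 kt), which is where crowd answers to the simplified questions are compared against the automated technique.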
Abstract
Tropical cloud clusters (TCCs) are traditionally defined as synoptic-scale areas of deep convection and associated cirrus outflow. They play a critical role in the energy balance of the tropics, releasing large amounts of latent heat high in the troposphere. If conditions are favorable, TCCs can develop into tropical cyclones (TCs), which put coastal populations at risk. Previous work, usually connected with large field campaigns, has investigated TCC characteristics over small areas and time periods. Recently, developments in satellite reanalysis and global best track assimilation have allowed for the creation of a much more extensive database of TCC activity. The authors use the TCC database to produce an extensive global analysis of TCCs, focusing on TCC climatology, variability, and genesis productivity (GP) over a 28-yr period (1982–2009). While global TCC frequency was fairly consistent over the time period, with relatively small interannual variability and no noticeable trend, regional analyses show a high degree of interannual variability with clear trends in some regions. Approximately 1600 TCCs develop around the globe each year; about 6.4% of those develop into TCs. The eastern North Pacific Ocean (EPAC) basin produces the highest number of TCCs (per unit area) in a given year, but the western North Pacific Ocean (WPAC) basin has the highest GP (~12%). Annual TCC frequency in some basins exhibits a strong correlation to sea surface temperatures (SSTs), particularly in the EPAC, North Atlantic Ocean, and WPAC. However, GP is not as sensitive to SST, supporting the hypothesis that the tropical cyclogenesis process is most sensitive to atmospheric dynamical considerations such as vertical wind shear and large-scale vorticity.
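Genesis productivity as used above is simply the fraction of tracked TCCs that became TCs, and the SST sensitivity claim amounts to comparing correlations. The annual series below are invented to illustrate the computation; only the ~1600 TCCs per year and ~6.4% global development rate come from the abstract.

```python
import numpy as np

def genesis_productivity(n_developing, n_total):
    """Fraction of tracked TCCs that developed into tropical cyclones."""
    return n_developing / n_total

# Global rate implied by the abstract: ~102 of ~1600 annual TCCs develop
gp_global = genesis_productivity(102, 1600)

# Hypothetical annual basin series: TCC counts track SST closely,
# while GP varies little with SST (the abstract's contrast).
sst = np.array([27.1, 27.5, 26.9, 27.8, 27.3])        # deg C
tcc_count = np.array([150, 170, 140, 180, 155])       # TCCs per year
gp = np.array([0.11, 0.12, 0.12, 0.11, 0.12])         # annual GP
r_count = np.corrcoef(sst, tcc_count)[0, 1]
r_gp = np.corrcoef(sst, gp)[0, 1]
```

With these illustrative numbers, TCC frequency correlates strongly with SST while GP does not, consistent with the hypothesis that genesis efficiency is governed more by dynamics (shear, vorticity) than by SST.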
Geostationary satellites have provided routine, high temporal resolution Earth observations since the 1970s. Despite the long period of record, use of these data in climate studies has been limited for numerous reasons, among them that no central archive of geostationary data for all international satellites exists, full temporal and spatial resolution data are voluminous, and diverse calibration and navigation formats encumber the uniform processing needed for multisatellite climate studies. The International Satellite Cloud Climatology Project (ISCCP) set the stage for overcoming these issues by archiving a subset of the full-resolution geostationary data at ~10-km resolution at 3-hourly intervals since 1983. Recent efforts at NOAA's National Climatic Data Center to provide convenient access to these data include remapping the data to a standard map projection, recalibrating the data to optimize temporal homogeneity, extending the record of observations back to 1980, and reformatting the data for broad public distribution. The Gridded Satellite (GridSat) dataset includes observations from the visible, infrared window, and infrared water vapor channels. Data are stored in Network Common Data Format (netCDF) using standards that permit a wide variety of tools and libraries to process the data quickly and easily. A novel data layering approach, together with appropriate satellite and file metadata, allows users to access GridSat data at varying levels of complexity based on their needs. The result is a climate data record already in use by the meteorological community. Examples include reanalysis of tropical cyclones, studies of global precipitation, and detection and tracking of the intertropical convergence zone.