Abstract
Tropical cyclone (TC) genesis forecasts during 2018–20 from two operational global ensemble prediction systems (EPSs) are evaluated over three basins in this study. The two ensembles are from the European Centre for Medium-Range Weather Forecasts (ECMWF-EPS) and the Met Office in the United Kingdom (UKMO-EPS). The three basins are the northwest Pacific, the northeast Pacific, and the North Atlantic. It is found that the ensemble members in each EPS show a good level of agreement in forecast skill, but their forecasts are complementary: the probability of detection (POD) can be doubled by taking all the member forecasts in the EPS into account. Even when an ensemble member does not make a hit forecast, it may still predict the presence of cyclonic vortices; statistically, a hit forecast has more nearby disturbance forecasts in the ensemble than a false alarm does. Based on this analysis, we grouped the nearby forecasts at each model initialization time to define ensemble genesis forecasts and verified these forecasts to represent the performance of the ensemble system. The resulting PODs are more than twice those of the individual ensemble members at most lead times, reaching about 59% and 38% at the 5-day lead time in UKMO-EPS and ECMWF-EPS, respectively, although the success ratios are smaller than those of the individual members. In addition, predictability differs among basins: genesis events in the North Atlantic are the most difficult to forecast, with PODs at the 5-day lead time of only 46% and 23% in UKMO-EPS and ECMWF-EPS, respectively.
Significance Statement
Operational forecasting of tropical cyclone (TC) genesis relies heavily on numerical models. Compared with deterministic forecasts, ensemble prediction systems (EPSs) can provide uncertainty information for forecasters. This study examined the predictability of TC genesis in two operational EPSs. We found that the forecasts of the ensemble members complement each other, and the detection ratio of observed genesis events can be doubled by considering the forecasts of all members, as the multiple simulations conducted by an EPS partially reflect the inherent uncertainties of the genesis process. Successful forecasts are surrounded by more cyclonic vortices in the ensemble than false alarms are, so this vortex information is used to group the nearby forecasts at each model initialization time into ensemble genesis forecasts when evaluating ensemble performance. The results demonstrate that global ensemble models can serve as a valuable reference for TC genesis forecasting.
Abstract
Producing an accurate and well-calibrated probabilistic forecast has high social and economic value. Systematic errors or biases in ensemble weather forecasts can be corrected by postprocessing models, whose development is an urgent challenge. Traditionally, bias correction is done with linear regression models that estimate the conditional probability distribution of the forecast. Although this framework works well, it is restricted to a prespecified model form that often relies on only a limited set of predictors. Most machine learning (ML) methods can tackle these problems with a point prediction, but only a few of them can be applied effectively in a probabilistic manner. Here, three tree-based ML techniques, namely, natural gradient boosting (NGB), quantile random forests (QRF), and distributional regression forests (DRF), are used to adjust hourly 2-m temperature ensemble predictions at lead times of 1–10 days. Ensemble model output statistics (EMOS) and its boosting version are used as benchmark models. The model forecasts are based on the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble for the Czech Republic domain. Two training periods, 2015–18 and 2018 only, were used to train the models, and their prediction skill was evaluated on 2019. The results show that the QRF and NGB methods provide the best performance for 1–2-day forecasts, while the EMOS method outperforms the others for 8–10-day forecasts. Key components for improving short-term forecasts are additional atmospheric/surface state predictors and the 4-yr training sample size.
Significance Statement
Machine learning methods have great potential and have been increasingly applied in meteorology in recent years. A new technique called natural gradient boosting (NGB) was recently released and is used in this paper to refine probabilistic forecasts of surface temperature. We found that NGB has better prediction skill than traditional ensemble model output statistics when forecasting 1 and 2 days in advance, and that it offers skill similar to that of other advanced machine learning methods, such as quantile random forests, at lower computational cost. We show a path for employing the NGB method in this task, which can be followed when refining other, more challenging meteorological variables such as wind speed or precipitation.
Abstract
High winds are one of the key forecast challenges across southeast Wyoming. The complex mountainous terrain across the region frequently results in strong gap winds in localized areas, as well as more widespread bora and chinook winds in the winter season (October–March). The predictors and general weather patterns that result in strong winds across the region are well understood by local forecasters. However, no single predictor provides notable skill by itself in separating warning-level events from others. Random forest (RF) classifier models were developed to improve high wind prediction using a training dataset constructed from archived observations and model parameters from the North American Regional Reanalysis (NARR). Three locations were selected for initial RF model development: the city of Cheyenne, Wyoming, and two gap regions along Interstate 80 (Arlington) and Interstate 25 (Bordeaux). Verification scores over two winters suggested the RF models were beneficial relative to current operational tools when predicting warning-criteria high wind events. Three case studies of high wind events illustrate the RF models’ value to forecast operations relative to current tools. The first case explores a classic, widespread high wind scenario that was well anticipated by local forecasters. The second case examines a more marginal scenario that presented greater forecast challenges related to the timing and intensity of the strongest winds. The final case study uses Global Forecast System (GFS) data as input to the RF models, further supporting real-time implementation in forecast operations.
Abstract
The impact of assimilating dropsonde data from the 2020 Atmospheric River (AR) Reconnaissance (ARR) field campaign on operational numerical precipitation forecasts was assessed. Two experiments were executed for the period from 24 January to 18 March 2020 using the NCEP Global Forecast System, version 15 (GFSv15), with a four-dimensional hybrid ensemble–variational (4DEnVar) data assimilation system. The control run (CTRL) used all the routinely assimilated data and included ARR dropsonde data, whereas the denial run (DENY) excluded the dropsonde data. There were 17 intensive observing periods (IOPs) totaling 46 Air Force C-130 and 16 NOAA G-IV missions to deploy dropsondes over targeted regions with potential for downstream high-impact weather associated with the ARs. Data from a total of 628 dropsondes were assimilated in the CTRL. The dropsonde data impact on precipitation forecasts over U.S. West Coast domains is largely positive, especially at the day-5 lead time, and appears driven by different model variables on a case-by-case basis. These results suggest that data gaps associated with ARs can be addressed by targeted ARR field campaigns providing vital observations needed to improve U.S. West Coast precipitation forecasts.
Abstract
This study explores forecaster perceptions of emerging needs for probabilistic forecasting of winter weather hazards through a nationwide survey disseminated to National Weather Service (NWS) forecasters. Questions addressed four relevant thematic areas: 1) messaging timelines for specific hazards, 2) modeling needs, 3) current preparedness to interpret and communicate probabilistic winter information, and 4) winter forecasting tools. The results suggest that winter hazards are messaged on varying time scales that sometimes do not match the needs of stakeholders. Most participants responded favorably to the idea of incorporating new hazard-specific regional ensemble guidance to fill gaps in the winter forecasting process. Forecasters provided recommendations for ensemble run length and output frequencies that would be needed to capture individual winter hazards. Qualitatively, forecasters expressed more difficulties communicating, rather than interpreting, probabilistic winter hazard information. Differences in training and the need for social-science-driven practices were identified as a few of the drivers limiting forecasters’ ability to provide strategic winter messaging. In the future, forecasters are looking for new winter tools to address forecasting difficulties, enhance stakeholder partnerships, and also be useful to the local community. On the regional scale, an ensemble system could potentially accommodate these needs and provide specialized guidance on timing and sensitive/high-impact winter events.
Significance Statement
Probabilistic information gives forecasters the ability to see a range of potential outcomes so that they know how much confidence to place in the forecast. In this study, we surveyed forecasters to understand how the research community can support probabilistic forecasting of winter weather. We found that forecasters want new technologies that help them understand difficult forecast situations, improve their communication, and are useful to their local communities. Most forecasters feel comfortable interpreting probabilistic information but are sometimes unsure how to communicate it to the public. We asked forecasters to share their recommendations for new weather models and tools, and we provide an overview of how the research community can support probabilistic winter forecasting efforts.
Abstract
Because bow echoes are often associated with damaging wind, accurate prediction of their severity is important. Recent work by Mauri and Gallus showed that despite increased challenges in forecasting nocturnal bows due to an incomplete understanding of how elevated convection interacts with the nocturnal stable boundary layer, several near-storm environmental parameters worked well to distinguish between bow echoes not producing severe winds (NS), those only producing low-intensity severe winds [LS; 50–55 kt (1 kt ≈ 0.51 m s−1)], and those associated with high-intensity (HS; >70 kt) severe winds. The present study performs a similar comparison for daytime warm-season bow echoes examining the same 43 SPC mesoanalysis parameters for 158 events occurring from 2010 to 2018. Although low-level shear and the meridional component of the wind discriminate well for nocturnal bow severity, they do not significantly differ in daytime bows. CAPE parameters discriminate well between daytime NS events and severe ones, but not between LS and HS, differing from nocturnal events where they discriminate between HS and the other types. The 500–850-hPa layer lapse rate works better to differentiate daytime bow severity, whereas the 500–700-hPa layer works better at night. Composite parameters work well to differentiate between all three severity types for daytime bow echoes, just as they do for nighttime ones, with the derecho composite parameter performing especially well. Heidke skill scores indicate that both individual and pairs of parameters generally are not as skillful at predicting daytime bow echo wind severity as they are at predicting nocturnal bow wind severity.
Abstract
The mass concentration of fine particulate matter (PM2.5; diameters less than 2.5 μm) estimated from geostationary satellite aerosol optical depth (AOD) data can supplement the network of ground monitors with high temporal (hourly) resolution. Estimates of PM2.5 over the United States were derived from NOAA’s operational geostationary satellites’ Advanced Baseline Imager (ABI) AOD data using a geographically weighted regression with hourly and daily temporal resolution. Validation against ground observations shows a mean bias of −21.4% and −15.3% for hourly and daily PM2.5 estimates, respectively, for concentrations ranging from 0 to 1000 μg m−3. Because satellites only observe AOD in the daytime, the relation between observed daytime PM2.5 and daily mean PM2.5 was evaluated using ground measurements; PM2.5 estimated from ABI AODs was also examined to study this relationship. The ground measurements show that daytime mean PM2.5 correlates well (r > 0.8) with daily mean PM2.5 in most areas of the United States, but with pronounced differences in the western United States due to temporal variations caused by wildfire smoke; the relation between the daytime and daily PM2.5 estimated from the ABI AODs has a similar pattern. While daily or daytime estimated PM2.5 provides exposure information in the context of the PM2.5 standard (>35 μg m−3), the hourly PM2.5 estimates used in nowcasting show promise for alerts and warnings of harmful air quality. The geostationary-satellite-based PM2.5 estimates can inform roughly 10 times as many people of harmful air quality as standard ground observations (1.8 versus 0.17 million people per hour).
Significance Statement
Fine particulate matter (PM2.5; diameters less than 2.5 μm) is generated from smoke, dust, and emissions from industrial, transportation, and other sectors. It is harmful to human health and can even lead to premature mortality. Data from geostationary satellites can help estimate surface PM2.5 exposure by filling gaps not covered by ground monitors, allowing people to plan their outdoor activities accordingly. This study shows that hourly PM2.5 estimates covering the entire continental United States are more informative to the public about harmful pollution exposure: on average, 1.8 million people per hour can be informed using satellite data, compared with 0.17 million people per hour based on ground observations alone.
Abstract
USAF Weather (AFW) supports a number of military and U.S. government agencies by providing authoritative weather analysis and forecast products for any location globally, including soil moisture analyses. This long history of supporting soil moisture products and partnering with other U.S. government agencies led to a partnership between the U.S. Air Force (USAF) and the NASA Goddard Space Flight Center, resulting in a merger of the two organizations’ modeling systems, collaborative development of the Land Information System (LIS), and operational fielding of the system within the USAF 557th Weather Wing [557 WW; formerly, Headquarters Air Force Weather Agency (HQ AFWA)]. In 2009, the USAF implemented the NASA LIS and later made it the primary software system for generating global soil hydrology and energy budget products. LIS delivered a significant upgrade over the Land Data Assimilation System (LDAS) the USAF had been operating, the Agriculture Meteorology (AGRMET) system. Its implementation enabled the rapid integration of new LDAS technology into USAF operations and led to a long-term NASA–USAF partnership with continued development, integration, and implementation of new LIS capabilities. This paper documents the history of the USAF Weather capabilities that enabled the generation of soil moisture and other land surface analysis products, and describes the USAF–NASA partnership that led to the development of the merged LIS-AGRMET system. The article also presents a successful example of a mutually beneficial partnership that has enabled cutting-edge land analysis capabilities at the USAF while transitioning NASA software and satellite data into USAF operations.
Abstract
This project tested software capabilities and operational implications related to interoffice collaboration during NWS severe weather warning operations within a proposed paradigm, Forecasting A Continuum of Environmental Threats (FACETs). The current NWS policy of each forecast office issuing warnings for an exclusive area of responsibility may result in inconsistent messaging. In contrast, the FACETs paradigm, with object-based, moving probabilistic and deterministic hazard information, could provide seamless information across NWS County Warning Areas (CWAs). An experiment was conducted that allowed NWS forecasters to test new software incorporating FACETs-based hazard information and potential concepts of operations for improving messaging consistency between adjacent WFOs. Experiment scenarios consisted of a variety of storm and office-border interactions, fictional events requiring nowcasts, and directives that mimicked differing inter-WFO warning philosophies. Surveys and semistructured interviews were conducted to gauge forecasters’ confidence and workload levels and to discuss potential solutions for interoffice collaboration and software issues. We found that forecasters adapted quickly to the new software and concepts and were comfortable collaborating with their neighboring WFO in warning operations. Although forecasters felt the software’s collaboration tools enabled them to communicate in a timely manner, this added collaboration increased their workload compared with current warning operations.
Abstract
Accurate visibility prediction is imperative for human and environmental health. However, existing numerical models for visibility prediction are characterized by low prediction accuracy and high computational cost. Thus, in this study, we predicted visibility using tree-based machine learning algorithms and numerical weather prediction data produced by the Local Data Assimilation and Prediction System (LDAPS) of the Korea Meteorological Administration. We then evaluated the accuracy of visibility prediction for Seoul, South Korea, through a comparative analysis against visibility observed by the automated synoptic observing system. The visibility predicted by the machine learning algorithms was compared with the visibility predicted by LDAPS. The LDAPS data employed to construct the visibility prediction model were divided into learning, validation, and test sets, and the optimal machine learning algorithm for visibility prediction was determined using the learning and validation sets. In this study, the extreme gradient boosting (XGB) algorithm showed the highest accuracy for visibility prediction. Comparative results on the test sets revealed lower prediction error and a higher correlation coefficient for visibility predicted by the XGB algorithm (bias: −0.62 km, MAE: 2.04 km, RMSE: 2.94 km, and R: 0.88) than for that predicted by LDAPS (bias: −0.32 km, MAE: 4.66 km, RMSE: 6.48 km, and R: 0.40). Moreover, the mean equitable threat score (ETS) also indicated higher prediction accuracy for visibility predicted by the XGB algorithm (ETS: 0.5–0.6 across visibility ranges) than for that predicted by LDAPS (ETS: 0.1–0.2).
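The verification scores quoted in this abstract (bias, MAE, RMSE, correlation R, and the equitable threat score) follow standard definitions and are easy to reproduce. A minimal sketch of their computation, using small synthetic visibility values rather than the study's LDAPS/observation data; the function name `verification_metrics` and the 5-km event threshold are illustrative choices, not from the paper:

```python
import math

def verification_metrics(obs, pred, threshold_km):
    """Continuous scores (bias, MAE, RMSE, Pearson r) plus the equitable
    threat score (ETS) for a single "visibility below threshold" event.
    Assumes non-degenerate inputs (some variance, at least one event)."""
    n = len(obs)
    bias = sum(p - o for o, p in zip(obs, pred)) / n
    mae = sum(abs(p - o) for o, p in zip(obs, pred)) / n
    rmse = math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / n)
    mo = sum(obs) / n
    mp = sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    r = cov / math.sqrt(sum((o - mo) ** 2 for o in obs)
                        * sum((p - mp) ** 2 for p in pred))
    # 2x2 contingency table for the low-visibility event
    hits = sum(1 for o, p in zip(obs, pred)
               if o < threshold_km and p < threshold_km)
    misses = sum(1 for o, p in zip(obs, pred)
                 if o < threshold_km and p >= threshold_km)
    fas = sum(1 for o, p in zip(obs, pred)
              if o >= threshold_km and p < threshold_km)
    hits_random = (hits + misses) * (hits + fas) / n  # chance-level hits
    ets = (hits - hits_random) / (hits + misses + fas - hits_random)
    return bias, mae, rmse, r, ets

# Illustrative call on made-up visibility values (km)
obs = [1.0, 2.0, 3.0, 10.0, 12.0]
pred = [1.5, 2.5, 8.0, 9.0, 11.0]
scores = verification_metrics(obs, pred, 5.0)
```

ETS corrects the threat score for hits expected by chance, so a no-skill forecast scores near zero; this is why the abstract's ETS gap (0.5–0.6 versus 0.1–0.2) is a meaningful skill difference rather than an artifact of event frequency.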