Abstract
Rogue waves are stochastic, individual ocean surface waves that are disproportionately large compared to the background sea state. They present considerable risk to mariners and offshore structures, especially when encountered in large seas. Current rogue wave forecasts are based on nonlinear processes quantified by the Benjamin–Feir index (BFI). However, there is increasing evidence that the BFI has limited predictive power in the real ocean and that rogue waves are largely generated by bandwidth-controlled linear superposition. Recent studies have shown that the bandwidth parameter crest–trough correlation r shows the highest univariate correlation with rogue wave probability. We corroborate this result and demonstrate that r has the highest predictive power for rogue wave probability based on analysis of open-ocean and coastal buoys in the northeast Pacific. This work further demonstrates that crest–trough correlation can be forecast by a regional WAVEWATCH III wave model with moderate accuracy. This result motivates a novel empirical rogue wave probability forecast for risk assessment based on r. Semilogarithmic fits between r and rogue wave probability were applied to generate the rogue wave probability forecast. A sample rogue wave probability forecast is presented for a large storm on 21–22 October 2021.
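The semilogarithmic fit between crest–trough correlation r and rogue wave probability described above amounts to a straight-line fit in log space. Below is a minimal sketch of such a fit, assuming hypothetical binned buoy estimates of r and observed rogue wave probability rather than the paper's actual data or coefficients:

```python
import numpy as np

# Hypothetical inputs: binned crest-trough correlation r and the observed
# rogue wave probability in each bin (assumed to come from buoy wave
# records; these numbers are illustrative only).
r_bins = np.array([0.55, 0.60, 0.65, 0.70, 0.75, 0.80])
p_rogue = np.array([1e-5, 2e-5, 4e-5, 8e-5, 1.6e-4, 3.2e-4])

# Semilogarithmic fit: log10(P) = a + b * r
b, a = np.polyfit(r_bins, np.log10(p_rogue), deg=1)

def rogue_wave_probability(r_forecast):
    """Map a model-forecast crest-trough correlation to a rogue wave probability."""
    return 10.0 ** (a + b * r_forecast)

# Example: probability implied by a forecast crest-trough correlation of 0.72
print(rogue_wave_probability(0.72))
```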
Significance Statement
Rogue waves pose a considerable threat to ships and offshore structures. The rare and unexpected nature of rogue waves makes predicting them an ongoing challenge. Recent work based on an extensive dataset of waves has suggested that the wave parameter called the crest–trough correlation shows the highest correlation with rogue wave probability. Our work demonstrates that crest–trough correlation can be reasonably well forecast by standard wave models. This suggests that current operational wave models can support rogue wave prediction models based on crest–trough correlation for improved rogue wave risk evaluation.
Abstract
The Hurricane Forecast Improvement Project (HFIP; renamed the “Hurricane Forecast Improvement Program” in 2017) was established by the U.S. National Oceanic and Atmospheric Administration (NOAA) in 2007 with a goal of improving tropical cyclone (TC) track and intensity predictions. A major focus of HFIP has been to increase the quality of guidance products for these parameters that are available to forecasters at the National Weather Service National Hurricane Center (NWS/NHC). One HFIP effort involved the demonstration of an operational decision process, named Stream 1.5, in which promising experimental versions of numerical weather prediction models were selected for TC forecast guidance. The selection occurred every year from 2010 to 2014 in the period preceding the hurricane season (defined as August–October), and was based on an extensive verification exercise of retrospective TC forecasts from candidate experimental models run over previous hurricane seasons. As part of this process, user-responsive verification questions were identified via discussions between NHC staff and forecast verification experts, with additional questions considered each year. A suite of statistically meaningful verification approaches consisting of traditional and innovative methods was developed to respond to these questions. Two examples of the application of the Stream 1.5 evaluations are presented, and the benefits of this approach are discussed. These benefits include the ability to provide information to forecasters and others that is relevant for their decision-making processes, via the selection of models that meet forecast quality standards and are meaningful for demonstration to forecasters in the subsequent hurricane season; clarification of user-responsive strengths and weaknesses of the selected models; and identification of paths to model improvement.
Significance Statement
The Hurricane Forecast Improvement Project (HFIP) tropical cyclone (TC) forecast evaluation effort led to innovations in TC predictions as well as new capabilities to provide more meaningful and comprehensive information about model performance to forecast users. Such an effort—to clearly specify the needs of forecasters and clarify how forecast improvements should be measured in a “user-oriented” framework—is rare. This project provides a template for one approach to achieving that goal.
Abstract
Extreme fire weather and fire behavior occurred during the New Year’s Eve period 30–31 December 2019 in southeast New South Wales, Australia. Fire progressed rapidly during the late evening and early morning periods, and significant extreme pyrocumulonimbus behavior developed, sometimes repeatedly in the same area. This occurred within a broader context of an unprecedented fire season in eastern Australia. Several aspects of the synoptic and mesoscale meteorology are examined, to identify contributions to fire behavior during this period. The passage of a cold front through the region was a key factor in the event, but other processes contributed to the severity of fire weather. Additional important features during this period included the movement of a negatively tilted upper-tropospheric trough, the interaction of the front with topography, and the occurrence of low-level overnight jets and of horizontal boundary layer rolls in the vicinity of the fireground.
Significance Statement
Wildfires and the weather that promotes their ignition and spread are a threat to communities and natural values globally, even in fire-adapted landscapes such as the western United States and Australia. In particular, savanna in the north of Australia regularly burns during the dry season while forest and grassland in the south burn episodically, mostly during the summer. Here, we examine the weather associated with destructive fires that occurred in southeast New South Wales, Australia, in late 2019. Weather and climate factors at several scales interacted to contribute to fire activity that was unusually dangerous. For meteorologists and emergency managers, case studies such as this are valuable to highlight conditions that may lead to future similar events. This case study also identified areas where improvements in fire weather service can be made, including the incorporation of more detailed weather information into models of fire behavior.
Abstract
Environments associated with severe hailstorms, compared to those of tornadoes, are often less apparent to forecasters. Understanding has evolved considerably in recent years; namely, that weak low-level shear and sufficient convective available potential energy (CAPE) above the freezing level are most favorable for large hail. However, this understanding comes only from examining the mean characteristics of large hail environments. How much variety exists within the kinematic and thermodynamic environments of large hail? Is there a balance between shear and CAPE analogous to that noted with tornadoes? We address these questions to move toward a more complete conceptual model. In this study, we investigate the environments of 92 323 hail reports (both severe and nonsevere) using ERA5 modeled proximity soundings. By employing a self-organizing map algorithm and subsetting these environments by a multitude of characteristics, we find that the conditions leading to large hail are highly variable, but three primary patterns emerge. First, hail growth depends on a favorable balance of CAPE, wind shear, and relative humidity, such that accounting for entrainment is important in parameter-based hail prediction. Second, hail growth is thwarted by strong low-level storm-relative winds, unless CAPE below the hail growth zone is weak. Finally, the maximum hail size possible in a given environment may be predictable by the depth of buoyancy, rather than CAPE itself.
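The study classifies hail environments with a self-organizing map (SOM). The following is a minimal numpy sketch of SOM training on standardized sounding-derived parameters; the feature set, grid size, and hyperparameters here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical standardized features per hail report (columns), e.g. CAPE,
# deep-layer shear, low-level storm-relative wind, mid-level RH.
X = rng.standard_normal((5000, 4))

rows, cols, n_iter = 4, 5, 20000
nodes = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
W = rng.standard_normal((rows * cols, X.shape[1]))   # node weight vectors

sigma0, lr0 = 2.0, 0.5
for t in range(n_iter):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # best-matching unit
    sigma = sigma0 * np.exp(-t / n_iter)              # shrinking neighborhood radius
    lr = lr0 * np.exp(-t / n_iter)                    # decaying learning rate
    d2 = ((nodes - nodes[bmu]) ** 2).sum(axis=1)      # grid distance to the BMU
    h = np.exp(-d2 / (2.0 * sigma ** 2))              # Gaussian neighborhood function
    W += lr * h[:, None] * (x - W)                    # pull nodes toward the sample

# Assign each environment to its SOM node (cluster label) for compositing
labels = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
```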
Abstract
Realistic initialization of the land surface is important to produce accurate NWP forecasts. Therefore, making use of available observations is essential when estimating the surface state. In this work, sequential land surface data assimilation of soil variables is replaced with an offline cycling method. To obtain the best possible initial state for the lower boundary of the NWP system, the land surface model is rerun between forecasts with an analyzed atmospheric forcing. We found a relative reduction of 2-m temperature root-mean-square errors and mean errors of 6% and 12%, respectively, and 4.5% and 11% for 2-m specific humidity. During a convective event, the system was able to produce useful (fractions skill score greater than the uniform forecast) forecasts [above 30 mm (12 h)⁻¹] down to a 100-km length scale where the reference failed to do so below 200 km. The different precipitation forcing caused differences in soil moisture fields that persisted for several weeks and consequently impacted the surface fluxes of heat and moisture and the forecasts of screen-level parameters. The experiments also indicate diurnal- and weather-dependent variations of the forecast errors that give valuable insight on the role of initial land surface conditions and the land–atmosphere interactions in southern Scandinavia.
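The convective-event verification above relies on the fractions skill score (FSS) and its "useful" threshold relative to a uniform forecast. A sketch of that standard computation on gridded forecast and observed precipitation fields is shown below; the threshold and 100-km length scale come from the abstract, while the grid spacing and implementation details are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, window):
    """Fractions skill score for one exceedance threshold and neighborhood size."""
    f_bin = (forecast >= threshold).astype(float)
    o_bin = (observed >= threshold).astype(float)
    # Neighborhood fractions of grid points exceeding the threshold
    pf = uniform_filter(f_bin, size=window, mode="constant")
    po = uniform_filter(o_bin, size=window, mode="constant")
    mse = np.mean((pf - po) ** 2)
    mse_ref = np.mean(pf ** 2) + np.mean(po ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

def fss_useful(observed, threshold):
    """'Useful' FSS level for a uniform forecast: 0.5 + f_obs / 2."""
    return 0.5 + np.mean(observed >= threshold) / 2.0

# Example with synthetic fields, a 30 mm (12 h)^-1 threshold, and a 100-km
# neighborhood assumed to be 40 grid points (i.e., ~2.5-km grid spacing).
rng = np.random.default_rng(1)
fcst = rng.gamma(0.5, 8.0, size=(400, 400))
obs = rng.gamma(0.5, 8.0, size=(400, 400))
print(fss(fcst, obs, threshold=30.0, window=40), fss_useful(obs, 30.0))
```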
Abstract
This study assesses the forecast skill of the Canadian Seasonal to Interannual Prediction System (CanSIPS), version 2, in predicting Arctic sea ice extent on both the pan-Arctic and regional scales. In addition, the forecast skill is compared to that of CanSIPS, version 1. Overall, the changes made in the development of CanSIPSv2 produce a net increase in forecast skill for detrended data. The most notable improvements are for forecasts of late summer and autumn target months initialized in April and May, which in previous studies have been associated with the spring predictability barrier. By comparing the skill of CanSIPSv1 and CanSIPSv2 to that of an intermediate version of CanSIPS, CanSIPSv1b, we can attribute skill differences between CanSIPSv1 and CanSIPSv2 to two main sources. First, an improved initialization procedure for sea ice initial conditions markedly improves forecast skill on the pan-Arctic scale as well as regionally in the central Arctic, Laptev Sea, Sea of Okhotsk, and Barents Sea. This conclusion is further supported by analysis of the predictive skill of the sea ice volume initialization field. Second, the change in model combination from CanSIPSv1 to CanSIPSv2 (exchanging the constituent CanCM3 model for GEM-NEMO) improves forecast skill in the Bering, Kara, Chukchi, Beaufort, East Siberian, Barents, and the Greenland–Iceland–Norwegian (GIN) Seas. In Hudson and Baffin Bay, as well as the Labrador Sea, there is limited and unsystematic improvement in forecasts of CanSIPSv2 as compared to CanSIPSv1.
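Skill for detrended sea ice extent can be summarized, as one simple sketch, by removing the linear trend from both the forecast and observed series for a given initialization and target month and then computing their correlation; the actual CanSIPS scoring may use different metrics, and the numbers below are hypothetical:

```python
import numpy as np

def detrended_correlation(forecast, observed):
    """Correlation of forecast and observed series after removing each linear trend."""
    years = np.arange(len(observed), dtype=float)
    f_detr = forecast - np.polyval(np.polyfit(years, forecast, 1), years)
    o_detr = observed - np.polyval(np.polyfit(years, observed, 1), years)
    return np.corrcoef(f_detr, o_detr)[0, 1]

# Hypothetical example: September pan-Arctic sea ice extent (10^6 km^2) from
# May-initialized hindcasts versus observations over a reforecast period.
obs = np.array([6.1, 5.9, 4.3, 6.0, 5.4, 4.6, 4.9, 5.1, 4.3, 4.7])
fc = obs + np.random.default_rng(2).normal(0.0, 0.4, size=obs.size)
print(detrended_correlation(fc, obs))
```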
Abstract
Realistic ocean initial conditions are essential for coupled hurricane forecasts. This study focuses on the impact of assimilating high-resolution ocean observations for initialization of the Modular Ocean Model (MOM6) in a coupled configuration with the Hurricane Analysis and Forecast System (HAFS). Based on the Joint Effort for Data Assimilation Integration (JEDI) framework, numerical experiments were performed for the Hurricane Isaias (2020) case, a category-1 hurricane, with use of underwater glider datasets and satellite observations. Assimilation of ocean glider data together with satellite observations provides an opportunity to further advance understanding of ocean conditions and air–sea interactions in coupled model initialization and hurricane forecast systems. This comprehensive data assimilation approach has led to a more accurate prediction of the salinity-induced barrier layer thickness that suppresses vertical mixing and sea surface temperature cooling during the storm. Increased barrier layer thickness enhances ocean enthalpy flux into the lower atmosphere and potentially increases tropical cyclone intensity. Assimilation of satellite observations demonstrates improvement in Hurricane Isaias’s intensity forecast. Assimilating glider observations with broad spatial and temporal coverage along Isaias’s track in addition to satellite observations further increases the accuracy of Isaias’s intensity forecast. Overall, this case study demonstrates the importance of assimilating comprehensive marine observations for a more robust ocean and hurricane forecast under a unified JEDI–HAFS hurricane forecast system.
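The salinity-induced barrier layer thickness (BLT) highlighted above is commonly diagnosed as the isothermal layer depth minus the density-based mixed layer depth. The sketch below uses one such threshold-based definition with a simplified linear equation of state; the threshold values, EOS coefficients, and synthetic profile are assumptions, not the study's exact diagnostics:

```python
import numpy as np

def barrier_layer_thickness(z, temp, salt, dT=0.2, ref_index=0):
    """BLT = isothermal layer depth (ILD) minus density mixed layer depth (MLD).

    z: depth (m, positive down); temp: degC; salt: psu (e.g., from a glider).
    Uses a simplified linear equation of state (alpha, beta assumed constant).
    """
    alpha, beta = 2.0e-4, 7.6e-4          # thermal expansion / haline contraction
    t0, s0 = temp[ref_index], salt[ref_index]

    # ILD: first depth where temperature drops by dT below the near-surface value
    ild = z[np.argmax(temp < t0 - dT)]

    # MLD: first depth where normalized density rises by the equivalent of dT
    dsigma = -alpha * (temp - t0) + beta * (salt - s0)   # ~ (rho - rho0) / rho0
    mld = z[np.argmax(dsigma > alpha * dT)]

    return max(ild - mld, 0.0)

# Hypothetical pre-storm profile with a fresh surface layer (barrier layer present)
z = np.arange(0.0, 200.0, 1.0)
temp = 29.0 - 0.02 * np.clip(z - 60.0, 0.0, None)          # isothermal to ~60 m
salt = 34.0 + 1.5 * np.clip(z - 20.0, 0.0, 40.0) / 40.0    # halocline from 20 to 60 m
print(barrier_layer_thickness(z, temp, salt))
```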
Significance Statement
This is the first comprehensive study of marine observations’ impact on hurricane forecasts using marine JEDI. This study found that assimilating satellite observations increases upper-ocean stratification during the prestorm period of Isaias. Assimilating preprocessed observations from six gliders increases salinity-induced upper-ocean barrier layer thickness, which reduces sea surface temperature cooling and increases enthalpy flux during the storm. This mechanism eventually enhances the hurricane intensity forecast. Overall, this study demonstrates a positive impact of assimilating comprehensive marine observations for a successful ocean and hurricane forecast under a unified JEDI–HAFS hurricane forecast system.
Abstract
We developed a storm surge ensemble prediction system (EPS) for lagoons and transitional environments. Lagoons are often threatened by storm surge events with consequent risks for human life and economic losses. The uncertainties connected with a classic deterministic forecast are many; thus, an ensemble forecast system is required to properly account for them and inform the end-user community accordingly. The technological resources now available allow us to investigate the possibility of operational ensemble forecasting systems that will become increasingly essential for coastal management. We show the advantages and limitations of an EPS applied to a lagoon, using a very high-resolution unstructured-grid finite element model and 45 EPS members. For five recent storm surge events, the EPS generally improves the forecast skill on the third forecast day compared to a single deterministic forecast, while the two are similar in the first two days. A weighting system is implemented to compute an improved ensemble mean. The uncertainties regarding sea level due to meteorological forcing, river runoff, initial conditions, and lateral boundaries are evaluated for a special case in the northern Adriatic Sea, and the different forecasts are used to compose the EPS members. We conclude that the largest uncertainty is in the initial and lateral boundary fields at different time and space scales, including the tidal components.
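The abstract mentions a weighting system for computing an improved ensemble mean without specifying its form. Below is a minimal sketch of one plausible scheme, inverse-error weights derived from each member's recent performance against tide-gauge observations; the operational scheme used in the paper may differ:

```python
import numpy as np

def weighted_ensemble_mean(members, past_members, past_obs, eps=1e-6):
    """Weight each of the 45 members by the inverse of its recent RMSE.

    members:      (n_members, n_times) sea level forecasts to combine
    past_members: (n_members, n_past) earlier forecasts at a tide gauge
    past_obs:     (n_past,) observed sea level for the same earlier times
    """
    rmse = np.sqrt(np.mean((past_members - past_obs) ** 2, axis=1))
    weights = 1.0 / (rmse + eps)
    weights /= weights.sum()
    return weights @ members   # (n_times,) weighted ensemble-mean sea level

# Example with synthetic data for 45 members
rng = np.random.default_rng(3)
past_obs = np.sin(np.linspace(0.0, 6.0, 48))
past_members = past_obs + rng.normal(0.0, rng.uniform(0.05, 0.3, 45)[:, None], (45, 48))
members = rng.normal(0.4, 0.1, (45, 72))
print(weighted_ensemble_mean(members, past_members, past_obs).shape)
```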
Significance Statement
Storm surges are extreme sea level events that may threaten densely populated coastal areas. The purpose of this work is to improve extreme sea level forecasts for transitional areas by identifying which forcings generate the largest uncertainties and by finding a technique that yields a reliable sea level forecast. This is achieved by implementing an ensemble prediction system running 45 members for each event considered. Results show that initial and lateral boundary conditions provide most of the uncertainty, including the tidal components. The weighting system applied to find the ensemble mean improves the forecast skill on the third forecast day, while it is comparable with the deterministic forecast in the first two days.
Abstract
The assimilation of two surface-sensitive channels of the AMSU-A instruments on board the NOAA-15/NOAA-18/NOAA-19 and MetOp-A/MetOp-B satellites over land was achieved in the China Meteorological Administration Global Forecast System (CMA_GFS). The land surface emissivity was calculated by 1) the window channel retrieval method and 2) the Tool to Estimate Land Surface Emissivities at Microwave frequencies (TELSEM2). Quality controls for these satellite microwave observations over land were conducted. The predictors and regression coefficients used for oceanic satellite data were retained during the bias correction over land and found to perform well. Three batch experiments were implemented in CMA_GFS with 4D-Var: 1) assimilating only the default data, and adding the above data over land with land surface emissivity obtained from 2) TELSEM2 and 3) the window channel retrieval method. The results indicated that the window channel retrieval method can better reduce the departure between the observed and simulated brightness temperature. Over most land types, the positive impacts of this method exceed those of TELSEM2. Both TELSEM2 and the window channel retrieval method improve the humidity analysis near the ground, as well as the forecast capability globally, particularly in those regions where the land coverage is greater, such as the Northern Hemisphere. The data utilization of the two surface-sensitive channels increases by 6% and 12%, respectively, and the additional data every 6 h cover most land areas where no surface-sensitive data were assimilated before. This study marks the beginning of surface-sensitive channel assimilation over land in CMA_GFS and paves the way for assimilating surface-sensitive channels from other satellite instruments.
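The window channel retrieval method estimates land surface emissivity directly from an observed window-channel brightness temperature. A sketch of the standard specular-surface form of this retrieval is shown below; the upwelling and downwelling brightness temperatures and transmittance would come from a radiative transfer calculation along the model first guess, and all numbers and variable names here are illustrative, not CMA_GFS values:

```python
import numpy as np

def retrieve_emissivity(tb_obs, t_skin, tb_up, tb_down, transmittance):
    """Invert Tb = Tb_up + tau * (eps * Ts + (1 - eps) * Tb_down) for eps.

    tb_obs:        observed window-channel brightness temperature (K)
    t_skin:        model first-guess skin temperature (K)
    tb_up/tb_down: upwelling/downwelling atmospheric brightness temperatures (K)
    transmittance: surface-to-space transmittance from a radiative transfer model
    """
    eps = (tb_obs - tb_up - transmittance * tb_down) / (
        transmittance * (t_skin - tb_down)
    )
    return np.clip(eps, 0.0, 1.0)   # keep the retrieval physically bounded

# Illustrative numbers only
print(retrieve_emissivity(tb_obs=272.0, t_skin=295.0, tb_up=15.0,
                          tb_down=20.0, transmittance=0.9))
```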
Significance Statement
Surface-sensitive microwave channels are difficult to assimilate in NWP due to the lack of both direct measurement and appropriate modeling for instantaneous land surface emissivity. This paper discusses a method that improves the surface emissivity estimates, which has allowed the utilization of surface-sensitive microwave channels in CMA_GFS. Those capabilities have resulted in better data utilization, improved forecasts of temperature, geopotential height, and winds in the Northern Hemisphere at 3–7 days, and represent an incremental and important improvement to CMA_GFS.
Abstract
This study evaluates the representation of tropical cyclone precipitation (TCP) in reforecasts from the Subseasonal to Seasonal (S2S) Prediction Project. The global distribution of precipitation in S2S models shows notable biases in the multimodel ensemble mean that are characterized by wet biases in total precipitation and TCP, except for the Atlantic. The TCP biases can contribute more than 50% of the total precipitation biases in basins such as the southern Indian Ocean and South Pacific. The magnitude and spatial pattern of these biases exhibit little variation with lead time. The origins of TCP biases can be attributed to biases in the frequency of tropical cyclone occurrence. The S2S models simulate too few TCs in the Atlantic and western North Pacific and too many TCs in the Southern Hemisphere and eastern North Pacific. At the storm scale, the average peak precipitation near the storm center is lower in the models than in observations due to too high a proportion of weak TCs. However, this bias is offset in some models by higher-than-observed precipitation rates at larger radii (300–500 km). An analysis of the mean TCP for each TC at each grid point reveals an overestimation of TCP rates, particularly in the near-equatorial Indian and western Pacific Oceans. These findings suggest that the simulation of TC occurrence and the storm-scale precipitation require better representation in order to reduce TCP biases and enhance the subseasonal prediction skill of mean and extreme total precipitation.
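TC precipitation is typically attributed by collecting precipitation within a fixed great-circle radius of each tracked TC center. A minimal sketch of that masking step is given below; the 500-km radius, grid, and synthetic precipitation field are illustrative assumptions rather than the study's exact attribution settings:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def tc_precip_mask(lat2d, lon2d, tc_lat, tc_lon, radius_km=500.0):
    """True where a grid point lies within radius_km of the TC center."""
    lat1, lon1 = np.radians(lat2d), np.radians(lon2d)
    lat2, lon2 = np.radians(tc_lat), np.radians(tc_lon)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    dist_km = 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))   # haversine distance
    return dist_km <= radius_km

# Example: attribute 6-hourly precipitation on a 1-degree grid to one TC fix
lon, lat = np.meshgrid(np.arange(0.0, 360.0, 1.0), np.arange(-40.0, 40.5, 1.0))
precip = np.random.default_rng(4).gamma(0.4, 2.0, lat.shape)   # mm (6 h)^-1
mask = tc_precip_mask(lat, lon, tc_lat=15.0, tc_lon=130.0)
tcp = np.where(mask, precip, 0.0)   # TC precipitation field for this time step
print(tcp.sum(), mask.sum())
```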