Search Results
Showing 1–9 of 9 items for Author or Editor: B. G. Hong
Abstract
Variability of sea level on the offshore side of the Gulf Stream has been estimated with a wind-forced numerical model. The difference in sea level between the model and coastal tide gauges therefore provides an estimate of variability of the Gulf Stream. These results can be compared with direct measurements of transport; the agreement is surprisingly good. Transport estimates are then made for sections offshore of four major tide stations along the U.S. East Coast. When data since World War II are used, the spectrum of sea level at the coast appears to peak at periods of ∼150–250 months. The difference signal (ocean minus coast), however, which the authors interpret as transport variability, has a weakly red spectrum. Power decreases somewhat less steeply than f⁻¹ at periods just less than ∼500 months but decreases strongly at periods less than ∼150 months. The low-frequency variability arises primarily from the influx of open ocean Rossby waves. The large variance at low frequencies suggests that measurements of the transport of western boundary currents do not have many degrees of freedom; measurements made many years apart may vary substantially because of this localized variability. Sea level at the coast is coherent over long distances, but the incoming Rossby wave radiation from the open ocean has a relatively short north–south scale. These results emphasize that transport measured at one location along the coast may be incoherent with transport at locations only ∼200 km away. As a result, measurements at one location will in general not be representative of transport along the entire coast.
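The transport estimate above rests on geostrophy. As a hedged sketch (the notation and the effective depth H_e are illustrative, not quoted from the paper): the surface geostrophic flow balances the cross-stream sea level slope, so integrating across the stream yields a transport anomaly proportional to the ocean-minus-coast sea level difference,

```latex
v = \frac{g}{f}\,\frac{\partial \eta}{\partial x}
\quad\Longrightarrow\quad
T' \;\sim\; \frac{g H_e}{f}\,\bigl(\eta_{\mathrm{offshore}} - \eta_{\mathrm{coast}}\bigr),
```

which is why the difference signal can be interpreted directly as transport variability.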
Abstract
The Bermuda tide gauge record extends back to the early 1930s. That sea level fluctuations there are highly coherent with dynamic height from hydrographic data has two interesting implications. First, it should contain information about the low-frequency circulation of the Atlantic. Furthermore, because dynamic height contains information on heat storage, it might, on the limited timescales accessible in the record, also contain clues about climate.
A simple model of wind forcing of the Atlantic from the African coast to Bermuda uses the Levitus mean density data to estimate the long Rossby wave speed as a function of longitude. Sea level and thermocline variability estimated this way are in remarkably good agreement with observations at periods of more than a few years duration. The peak-to-peak sea level signal is ∼18 cm, which is nearly 25% of the slope across the Gulf Stream at this latitude. The model results suggest that the variability is largest somewhat to the east of Bermuda; fluctuations of ∼10 cm extend as far east as ∼35°W.
One surprising result is that at the longest periods in the COADS data, the wind curl has a double-peak structure in longitude. That is, there is a significant amount of power on the eastern side of the ocean as well as near Bermuda. Therefore, it is essential to use the full horizontal resolution of the wind data; using the mean curl across the Atlantic turns out not to be a good way to estimate thermocline variability. One might wonder if the wind data are reliable at these long periods were it not for the good agreement between the results and observed sea level. The power in wind variability increases out to ∼500 months, although with little statistical reliability. Sea level variability, however, appears to peak at somewhat shorter periods. Although this is pushing the resolution of the data, the result appears to be a limitation imposed by the basin-width scale. The power in the model ocean's response to wind forcing is nearly an order of magnitude larger during the first half of the record (1952–69) than during the second (1970–86).
It is likely that significant changes in buoyancy forcing by the atmosphere are coherent with changes in wind. Nevertheless, these results suggest that the variability in sea level—and so in deep temperature—can perhaps be accounted for without invoking changes in stored heat of the deep ocean.
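The "simple model" described above is, in standard form, a reduced-gravity long Rossby wave equation forced by Ekman pumping; the symbols and sign conventions here are a reconstruction of that standard form, not quoted from the paper. With h the thermocline depth anomaly, c_R(x) > 0 the westward long-wave speed estimated from the Levitus density data, and w_E the Ekman pumping velocity:

```latex
\frac{\partial h}{\partial t} - c_R(x)\,\frac{\partial h}{\partial x} = -\,w_E,
\qquad
w_E = \nabla\times\!\Bigl(\frac{\boldsymbol{\tau}}{\rho_0 f}\Bigr),
\qquad
\eta \approx \frac{g'}{g}\,h.
```

The solution is obtained by integrating the forcing westward along characteristics from the African coast to Bermuda, which is why the full longitudinal structure of the wind curl matters.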
Abstract
In the central North Atlantic Ocean there are large decadal-scale fluctuations of sea level and of the depth of the thermocline. This variability can be explained by low-frequency Rossby waves forced by wind. The authors have used a simple model of ocean fluctuations driven by the COADS wind-stress curl and find that their model results and hydrographic data, to the limited extent they can be compared meaningfully, agree rather well.
The variation in wind curl over the ocean leads to surprisingly large north–south variability in the computed oceanic response over the subtropical gyre. The peak-to-peak sea level differences are as great as 20 cm and persist for many years. These large variations could induce major errors in calculations of the mean ocean flow when hydrographic sections from many years are combined. It appears possible, however, to correct for these wind-induced effects to allow the lower-frequency signals to be determined.
The westernmost edge of our oceanic calculations is at the offshore side of the Gulf Stream. These fluctuations show changes in the north–south slope of sea level that imply long intervals of transport into or out of the Gulf Stream and provide suggestive evidence for low-frequency variability in coastal sea level and in the transport of the stream. This variability has been found previously in more complex ocean models, but the simplicity of these calculations may make the forcing mechanism and the ocean’s response easier to understand.
Abstract
One of the puzzling features of sea level on the east coast of the United States is the decadal-scale variability; the fluctuations are 10–15 cm, peak to peak, at periods longer than a few years. The authors find that this variability, in the frequency band treated with the model, is largely caused by a deep-sea signal generated by the wind stress curl over the North Atlantic. A simple forced long Rossby wave model of the response of the thermocline to wind forcing is used, computing long-wave speeds from observed hydrographic data. The authors model the response of the ocean at periods longer than 3 years for the full width of the Atlantic and for the north–south extent of the main anticyclonic gyre, 18°–38°N. The model output in deep water shows remarkably good agreement with tide gauges, both at Bermuda (32°N) and Puerto Rico (18°N), as well as with dynamic height fluctuations of ∼20 cm peak to peak.
Once these fluctuations reach the western side of the ocean, the authors estimate coastal sea level by constructing a complementary coastal model. The coastal model is geostrophic and conserves mass within a nearshore region that encompasses the Gulf Stream. By extending this nearshore region as far south as 14°–18°N and using only the oceanic fluctuations to force the variability in the stream, between 80% and 90% of the variance of sea level at coastal tide gauges can be explained. Sea level along the coast is used to test the model assumptions. The basic results, however, seem important because they are constrained only by open ocean wind forcing and not by input boundary conditions.
Abstract
This paper describes the latest improvements applied to the Goddard profiling algorithm (GPROF), particularly as they apply to the Tropical Rainfall Measuring Mission (TRMM). Most of these improvements, however, are conceptual in nature and apply equally to other passive microwave sensors. The improvements were motivated by a notable overestimation of precipitation in the intertropical convergence zone. This problem was traced back to the algorithm's poor separation between convective and stratiform precipitation coupled with a poor separation between stratiform and transition regions in the a priori cloud model database. In addition to now using an improved convective–stratiform classification scheme, the new algorithm also makes use of emission and scattering indices instead of individual brightness temperatures. Brightness temperature indices have the advantage of being monotonic functions of rainfall. This, in turn, has allowed the algorithm to better define the uncertainties needed by the scheme's Bayesian inversion approach. Last, the algorithm over land has been modified primarily to better account for ambiguous classification where the scattering signature of precipitation could be confused with surface signals. All these changes have been implemented for both the TRMM Microwave Imager (TMI) and the Special Sensor Microwave Imager (SSM/I). Results from both sensors are very similar at the storm scale and for global averages. Surface rainfall products from the algorithm's operational version have been compared with conventional rainfall data over both land and oceans. Over oceans, GPROF results compare well with atoll gauge data. GPROF is biased negatively by 9% with a correlation of 0.86 for monthly 2.5° averages over the atolls. If only grid boxes with two or more atolls are used, the correlation increases to 0.91 but GPROF becomes positively biased by 6%. 
Comparisons with TRMM ground validation products from Kwajalein reveal that GPROF is negatively biased by 32%, with a correlation of 0.95 when coincident images of the TMI and Kwajalein radar are used. The absolute magnitude of rainfall measured from the Kwajalein radar, however, remains uncertain, and GPROF overestimates the rainfall by approximately 18% when compared with estimates done by a different research group. Over land, GPROF shows a positive bias of 17% and a correlation of 0.80 over monthly 5° grids when compared with the Global Precipitation Climatology Center (GPCC) gauge network. When compared with the precipitation radar (PR) over land, GPROF also retrieves higher rainfall amounts (20%). No vertical hydrometeor profile information is available over land. The correlation with the TRMM precipitation radar is 0.92 over monthly 5° grids, but GPROF is positively biased by 24% relative to the radar over oceans. Differences between TMI- and PR-derived vertical hydrometeor profiles below 2 km are consistent with this bias but become more significant with altitude. Above 8 km, the sensors disagree significantly, but the information content is low from both TMI and PR. The consistent bias between these two sensors without clear guidance from the ground-based data reinforces the need for better understanding of the physical assumptions going into these retrievals.
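The Bayesian inversion step described above can be illustrated with a minimal sketch. This is not the operational GPROF code; the database, indices, and uncertainties below are synthetic and the names are hypothetical. Each a priori database entry is weighted by a Gaussian likelihood of its simulated brightness-temperature indices given the observed ones, and the retrieval is the weighted mean rain rate:

```python
import numpy as np

def bayesian_retrieval(obs_indices, db_indices, db_rain, index_sigma):
    """Weighted-average Bayesian retrieval (illustrative).

    obs_indices : (n_channels,) observed emission/scattering indices
    db_indices  : (n_entries, n_channels) simulated indices per database entry
    db_rain     : (n_entries,) surface rain rate of each entry (mm/h)
    index_sigma : (n_channels,) assumed index uncertainties
    """
    resid = (db_indices - obs_indices) / index_sigma   # normalized misfit
    weights = np.exp(-0.5 * np.sum(resid**2, axis=1))  # Gaussian likelihood
    weights /= weights.sum()                           # posterior weights
    return float(np.dot(weights, db_rain))             # expected rain rate

# Tiny synthetic database: the observation lies midway between the
# first two entries, so they share the weight almost equally.
db_idx = np.array([[200.0, 250.0], [220.0, 270.0], [260.0, 290.0]])
db_rr = np.array([1.0, 5.0, 20.0])
obs = np.array([210.0, 260.0])
rr = bayesian_retrieval(obs, db_idx, db_rr, index_sigma=np.array([10.0, 10.0]))
# rr is close to the mean of the two nearby entries, ~3 mm/h
```

The monotonicity of the indices in rainfall, noted above, is what makes a single Gaussian uncertainty per channel a reasonable weighting choice.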
Abstract
The Tropical Rainfall Measuring Mission (TRMM) satellite was launched on 27 November 1997, and data from all the instruments first became available approximately 30 days after the launch. Since then, much progress has been made in the calibration of the sensors, the improvement of the rainfall algorithms, and applications of these results to areas such as data assimilation and model initialization. The TRMM Microwave Imager (TMI) calibration has been corrected and verified to account for a small source of radiation leaking into the TMI receiver. The precipitation radar calibration has been adjusted upward slightly (by 0.6 dBZ) to match better the ground reference targets; the visible and infrared sensor calibration remains largely unchanged. Two versions of the TRMM rainfall algorithms are discussed. The at-launch (version 4) algorithms showed differences of 40% when averaged over the global Tropics over 30-day periods. The improvements to the rainfall algorithms that were undertaken after launch are presented, and intercomparisons of these products (version 5) show agreement improving to 24% for global tropical monthly averages. The ground-based radar rainfall product generation is discussed. Quality-control issues have delayed the routine production of these products until the summer of 2000, but comparisons of TRMM products with early versions of the ground validation products as well as with rain gauge network data suggest that uncertainties among the TRMM algorithms are of approximately the same magnitude as differences between TRMM products and ground-based rainfall estimates. The TRMM field experiment program is discussed to describe active areas of measurements and plans to use these data for further algorithm improvements. 
In addition to the many papers in this special issue, results coming from the analysis of TRMM products to study the diurnal cycle, the climatological description of the vertical profile of precipitation, storm types, and the distribution of shallow convection, as well as advances in data assimilation of moisture and model forecast improvements using TRMM data, are discussed in a companion TRMM special issue in the Journal of Climate (1 December 2000, Vol. 13, No. 23).
Abstract
The NASA Cloud, Aerosol, and Monsoon Processes Philippines Experiment (CAMP2Ex) employed the NASA P-3, Stratton Park Engineering Company (SPEC) Learjet 35, and a host of satellites and surface sensors to characterize the coupling of aerosol processes, cloud physics, and atmospheric radiation within the Maritime Continent’s complex southwest monsoonal environment. Conducted in the late summer of 2019 from Luzon, Philippines, in conjunction with the Office of Naval Research Propagation of Intraseasonal Tropical Oscillations (PISTON) experiment with its R/V Sally Ride stationed in the northwestern tropical Pacific, CAMP2Ex documented diverse biomass burning, industrial and natural aerosol populations, and their interactions with small to congestus convection. The 2019 season exhibited El Niño conditions and associated drought, high biomass burning emissions, and an early monsoon transition allowing for observation of pristine to massively polluted environments as they advected through intricate diurnal mesoscale and radiative environments into the monsoonal trough. CAMP2Ex’s preliminary results indicate 1) increasing aerosol loadings tend to invigorate congestus convection in height and increase liquid water paths; 2) lidar, polarimetry, and geostationary Advanced Himawari Imager remote sensing sensors have skill in quantifying diverse aerosol and cloud properties and their interaction; and 3) high-resolution remote sensing technologies are able to greatly improve our ability to evaluate the radiation budget in complex cloud systems. Through the development of innovative informatics technologies, CAMP2Ex provides a benchmark dataset of an environment of extremes for the study of aerosol, cloud, and radiation processes as well as a crucible for the design of future observing systems.
Abstract
Our world is rapidly changing. Societies are facing an increase in the frequency and intensity of high-impact and extreme weather and climate events. These extremes together with exponential population growth and demographic shifts (e.g., urbanization, increase in coastal populations) are increasing the detrimental societal and economic impact of hazardous weather and climate events. Urbanization and our changing global economy have also increased the need for accurate projections of climate change and improved predictions of disruptive and potentially beneficial weather events on kilometer scales. Technological innovations are also leading to an evolving and growing role of the private sector in the weather and climate enterprise. This article discusses the challenges faced in accelerating advances in weather and climate forecasting and proposes a vision for key actions needed across the private, public, and academic sectors. Actions span (i) utilizing the new observational and computing ecosystems; (ii) strategies to advance Earth system models; (iii) ways to benefit from the growing role of artificial intelligence; (iv) practices to improve the communication of forecast information and decision support in our age of internet and social media; and (v) addressing the need to reduce the relatively large, detrimental impacts of weather and climate on all nations and especially on low-income nations. These actions will be based on a model of improved cooperation between the public, private, and academic sectors. This article represents a concise summary of the white paper on the Future of Weather and Climate Forecasting (2021) put together by the World Meteorological Organization's Open Consultative Platform.
The Third Comparison of Mesoscale Prediction and Research Experiment (COMPARE) workshop was held in Tokyo, Japan, on 13–15 December 1999, cosponsored by the Japan Meteorological Agency (JMA), the Japan Science and Technology Agency, and the World Meteorological Organization. The third case of COMPARE focuses on an event of explosive tropical cyclone development [Typhoon Flo (9019)] that occurred during three cooperative field experiments, the Tropical Cyclone Motion experiment 1990, the Special Experiment Concerning Recurvature and Unusual Motion, and TYPHOON-90, conducted in the western North Pacific in August and September 1990. Fourteen models from nine countries participated in at least part of a set of experiments combining four provided initial conditions with three horizontal resolutions. The resultant forecasts were collected, processed, and verified against analyses and observational data at JMA. Archived datasets were prepared for distribution to participating members for use in further evaluation studies.
In the workshop, preliminary conclusions from the evaluation study were presented and discussed in light of the experiment's aims and from the viewpoints of tropical cyclone experts. Initial conditions, which depend on both large-scale analyses and vortex bogusing, have a large impact on tropical cyclone intensity predictions. Some models succeeded in predicting the explosive deepening of the target typhoon at least qualitatively in terms of the time evolution of central pressure. Horizontal grid spacing has a very large impact on tropical cyclone intensity prediction, while the impact of vertical resolution is less clear, with some models being very sensitive and others less so. The structure of, and processes in, the eyewall clouds with subsidence inside, as well as boundary layer and moist physical processes, are considered important in the explosive development of tropical cyclones. Follow-up research activities for this case were proposed to examine possible working hypotheses related to the explosive development.
New strategies for selection of future COMPARE cases were worked out, including seven suitability requirements to be met by candidate cases. The VORTEX95 case was withdrawn as a candidate, and two other possible cases were presented and discussed.