Search Results
You are looking at 1 - 10 of 16 items for
- Author or Editor: J. Lean
Abstract
No abstract available.
Abstract
The experiment reported on here presents a realistic portrayal of Amazonian deforestation that uses measurements of vegetation characteristics, taken as part of the Anglo–Brazilian Amazonian Climate Observation Study field campaigns, to define the forest and replacement pasture vegetation in the Hadley Centre GCM. The duration of the main experiment (10 yr) leads to greater confidence in assessing regional changes than in previous shorter experiments.
Complete removal of the Amazonian forest produced area-mean changes that resemble those of earlier experiments, with decreases in evaporation of 0.76 mm day⁻¹ (18%) and rainfall of 0.27 mm day⁻¹ (4%) and a rise in surface temperature of 2.3°C. However, the relative magnitudes of these changes indicate that increased moisture convergence partly compensates for the reduced evaporation, in contrast to many previous deforestation experiments. Results also showed large regional variations in the change in annual-mean rainfall over South America, with widespread decreases over most of the deforested area and increases near the Andes.
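The compensation can be made explicit with the long-term area-mean atmospheric moisture budget, in which precipitation is balanced by evaporation plus moisture convergence; the following is a back-of-envelope check using the figures quoted above, not a calculation taken from the paper:

```latex
% Long-term area-mean moisture budget: P = E + C, with C the moisture convergence
\Delta C = \Delta P - \Delta E = (-0.27) - (-0.76) = +0.49\ \text{mm day}^{-1}
```

so the implied rise in moisture convergence offsets roughly two-thirds of the 0.76 mm day⁻¹ decrease in evaporation.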
A better understanding of the mechanisms responsible for the final deforested climate has been gained by carrying out additional experiments that examine the response to separate changes in roughness and albedo. Increased albedo resulted in widespread significant decreases in rainfall due to less moisture convergence and ascent. The response to reduced roughness is more complex but of comparable importance; in this experiment it was dominated by an increase in low-level wind speeds resulting in decreased moisture convergence and rainfall near the upwind edge of the area and the opposite near the downwind boundary where the increased flow meets the Andes.
In the standard deforestation scenario all vegetation parameters were modified together with one soil parameter, the maximum infiltration rate, which was reduced to represent the observed compaction of soil following deforestation. Results from a further experiment, in which the maximum infiltration rate was left unchanged, showed much smaller reductions in evaporation of 0.3 mm day⁻¹ (7%) and indicated that the predicted regional changes in rainfall and evaporation were very sensitive to this parameter.
Abstract
Climate data from the Anglo–Brazilian Amazonian Climate Observation Study have been compared with the simulations of three general circulation models with prognostic cloud schemes. Monthly averages of net all-wave radiation, incoming solar radiation, net longwave radiation, and precipitation obtained from automatic weather stations sited in three areas of Amazonia are compared with the output from the Unified Model of the Hadley Centre for Climate Prediction and Research, the operational forecasting model of the European Centre for Medium-Range Weather Forecasts (ECMWF), and the model of the Laboratoire de Météorologie Dynamique (LMD). The performance of these models is much improved relative to earlier comparisons of observations with the output of less sophisticated models. However, the Hadley Centre and LMD models tend to overpredict net and solar radiation, and the ECMWF model underpredicts net and solar radiation at two of the sites, but performs very well in Manaus. It is shown that the errors are mainly linked to the amount of cloud cover produced by the models, but also to the incoming clear-sky solar radiation.
Abstract
Bayesian model averaging (BMA) is a statistical way of postprocessing forecast ensembles to create predictive probability density functions (PDFs) for weather quantities. It represents the predictive PDF as a weighted average of PDFs centered on the individual bias-corrected forecasts, where the weights are posterior probabilities of the models generating the forecasts and reflect the forecasts’ relative contributions to predictive skill over a training period. It was developed initially for quantities whose PDFs can be approximated by normal distributions, such as temperature and sea level pressure. BMA does not apply in its original form to precipitation, because the predictive PDF of precipitation is nonnormal in two major ways: it has a positive probability of being equal to zero, and it is skewed. In this study BMA is extended to probabilistic quantitative precipitation forecasting. The predictive PDF corresponding to one ensemble member is a mixture of a discrete component at zero and a gamma distribution. Unlike methods that predict the probability of exceeding a threshold, BMA gives a full probability distribution for future precipitation. The method was applied to daily 48-h forecasts of 24-h accumulated precipitation in the North American Pacific Northwest in 2003–04 using the University of Washington mesoscale ensemble. It yielded predictive distributions that were calibrated and sharp. It also gave probability of precipitation forecasts that were much better calibrated than those based on consensus voting of the ensemble members. It gave better estimates of the probability of high-precipitation events than logistic regression on the cube root of the ensemble mean.
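As a concrete illustration of the mixture form described above, the sketch below builds a BMA predictive CDF in which each ensemble member contributes a point mass at zero plus a gamma distribution for positive amounts, combined using the BMA weights. All weights and parameters are invented placeholders, not values fitted in the study.

```python
# Minimal sketch of a BMA predictive CDF for 24-h precipitation:
# a weighted mixture, over ensemble members, of a point mass at zero
# (dry case) and a gamma distribution (wet case). Parameters are illustrative.
import numpy as np
from scipy.stats import gamma

def bma_precip_cdf(y, weights, pop, shapes, scales):
    """Return P(precip <= y) under the BMA mixture."""
    y = np.atleast_1d(y).astype(float)
    cdf = np.zeros_like(y)
    for w, p, k, s in zip(weights, pop, shapes, scales):
        # (1 - p): probability of zero precipitation for this member
        # p * gamma.cdf(...): wet (gamma) component of the member's PDF
        cdf += w * ((1.0 - p) + p * gamma.cdf(y, a=k, scale=s))
    return cdf

# Hypothetical three-member ensemble
weights = [0.5, 0.3, 0.2]        # BMA weights (sum to 1)
pop     = [0.6, 0.7, 0.5]        # per-member probability of rain
shapes  = [0.8, 1.0, 0.9]        # gamma shape parameters
scales  = [5.0, 4.0, 6.0]        # gamma scale parameters (mm)

print("P(rain)        :", 1.0 - bma_precip_cdf(0.0, weights, pop, shapes, scales)[0])
print("P(rain > 10 mm):", 1.0 - bma_precip_cdf(10.0, weights, pop, shapes, scales)[0])
```

Unlike a single probability-of-exceedance product, evaluating this CDF at any threshold gives the full predictive distribution described in the abstract.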
Abstract
We present a new climate data record for total solar irradiance and solar spectral irradiance between 1610 and the present day with associated wavelength- and time-dependent uncertainties and quarterly updates. The data record, which is part of the National Oceanic and Atmospheric Administration’s (NOAA) Climate Data Record (CDR) program, provides a robust, sustainable, and scientifically defensible record of solar irradiance that is of sufficient length, consistency, and continuity for use in studies of climate variability and climate change on multiple time scales and for user groups spanning climate modeling, remote sensing, and natural resource and renewable energy industries. The data record, jointly developed by the University of Colorado’s Laboratory for Atmospheric and Space Physics (LASP) and the Naval Research Laboratory (NRL), is constructed from solar irradiance models that determine the changes with respect to quiet sun conditions when facular brightening and sunspot darkening features are present on the solar disk; the magnitude of the irradiance changes is determined from linear regression of a proxy magnesium (Mg) II index and sunspot area indices against the approximately decade-long solar irradiance measurements of the Solar Radiation and Climate Experiment (SORCE). To promote long-term data usage and sharing for a broad range of users, the source code, the dataset itself, and supporting documentation are archived at NOAA’s National Centers for Environmental Information (NCEI). In the future, the dataset will also be available through the LASP Interactive Solar Irradiance Data Center (LISIRD) for user-specified time periods and spectral ranges of interest.
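A minimal sketch of the proxy-regression idea described above is given below, using synthetic stand-in data: irradiance is modeled as a quiet-Sun baseline plus terms proportional to a facular brightening proxy (Mg II index) and a sunspot darkening proxy (sunspot area), with coefficients obtained by ordinary least squares against measured irradiance. Variable names and values are illustrative assumptions, not the actual LASP/NRL model inputs.

```python
# Illustrative sketch (synthetic data) of regressing measured irradiance on
# facular and sunspot proxies, as in the proxy-model approach described above.
import numpy as np

rng = np.random.default_rng(0)
n = 3650                                   # roughly a decade of daily values
mg_ii = 0.150 + 0.010 * rng.random(n)      # facular proxy (Mg II index, arbitrary scale)
ssa   = 500.0 * rng.random(n)              # sunspot darkening proxy (sunspot area)

# Synthetic "measured" irradiance: quiet Sun + facular brightening - sunspot darkening + noise
tsi_obs = 1360.5 + 80.0 * (mg_ii - 0.150) - 8.0e-4 * ssa + 0.01 * rng.standard_normal(n)

# Ordinary least squares: columns are quiet-Sun constant, facular term, sunspot term
X = np.column_stack([np.ones(n), mg_ii - 0.150, ssa])
coef, *_ = np.linalg.lstsq(X, tsi_obs, rcond=None)
quiet_sun, k_facular, k_sunspot = coef

tsi_model = X @ coef                        # reconstructed total solar irradiance, W m^-2
print(f"quiet Sun ~ {quiet_sun:.2f} W m^-2, facular coef {k_facular:.1f}, sunspot coef {k_sunspot:.2e}")
```

Once the coefficients are fixed against the SORCE-era measurements, the same regression can be evaluated wherever the proxies are available, which is what allows the record to extend back to 1610 and to be updated quarterly.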
Abstract
We describe the historical evolution of the conceptualization, formulation, quantification, application, and utilization of “radiative forcing” (RF) of Earth’s climate. Basic theories of shortwave and longwave radiation were developed through the nineteenth and twentieth centuries and established the analytical framework for defining and quantifying the perturbations to Earth’s radiative energy balance by natural and anthropogenic influences. The insight that Earth’s climate could be radiatively forced by changes in carbon dioxide, first introduced in the nineteenth century, gained empirical support with sustained observations of the atmospheric concentrations of the gas beginning in 1957. Advances in laboratory and field measurements, theory, instrumentation, computational technology, data, and analysis of well-mixed greenhouse gases and the global climate system through the twentieth century enabled the development and formalism of RF; this allowed RF to be related to changes in global-mean surface temperature with the aid of increasingly sophisticated models. This in turn led to RF becoming firmly established as a principal concept in climate science by 1990. The linkage with surface temperature has proven to be the most important application of the RF concept, enabling a simple metric to evaluate the relative climate impacts of different agents. The late 1970s and 1980s saw accelerated developments in quantification, including the first assessment of the effect of the forcing due to the doubling of carbon dioxide on climate (the “Charney” report). The concept was subsequently extended to a wide variety of agents beyond well-mixed greenhouse gases (WMGHGs; carbon dioxide, methane, nitrous oxide, and halocarbons) to short-lived species such as ozone. The WMO and IPCC international assessments began the important sequence of periodic evaluations and quantifications of the forcings by natural (solar irradiance changes and stratospheric aerosols resulting from volcanic eruptions) and a growing set of anthropogenic agents (WMGHGs, ozone, aerosols, land surface changes, contrails). From the 1990s to the present, knowledge and scientific confidence in the radiative agents acting on the climate system have proliferated. The conceptual basis of RF has also evolved as both our understanding of the way radiative forcing drives climate change and the diversity of the forcing mechanisms have grown. This has led to the current situation where “effective radiative forcing” (ERF) is regarded as the preferred practical definition of radiative forcing in order to better capture the link between forcing and global-mean surface temperature change. The use of ERF, however, comes with its own attendant issues, including challenges in its diagnosis from climate models, its applications to small forcings, and blurring of the distinction between rapid climate adjustments (fast responses) and climate feedbacks; this will necessitate further elaboration of its utility in the future. Global climate model simulations of radiative perturbations by various agents have established how the forcings affect other climate variables besides temperature (e.g., precipitation). The forcing–response linkage as simulated by models, including the diversity in the spatial distribution of forcings by the different agents, has provided a practical demonstration of the effectiveness of agents in perturbing the radiative energy balance and causing climate changes. 
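The forcing-temperature linkage referred to above is conventionally written through the global-mean energy balance; the schematic relation below is the textbook form, not a result specific to this review:

```latex
% Global-mean energy balance: N is the top-of-atmosphere imbalance,
% F the (effective) radiative forcing, \alpha the climate feedback parameter,
% and \lambda = 1/\alpha the climate sensitivity parameter.
N = F - \alpha\,\Delta T_s, \qquad \Delta T_{s,\mathrm{eq}} = \frac{F}{\alpha} = \lambda F
```

It is this simple proportionality between forcing and equilibrium warming that makes RF (and ERF) useful as a metric for comparing agents.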
The significant advances over the past half century have established, with very high confidence, that the global-mean ERF due to human activity since preindustrial times is positive (the 2013 IPCC assessment gives a best estimate of 2.3 W m⁻², with a range from 1.1 to 3.3 W m⁻²; 90% confidence interval). Further, except in the immediate aftermath of climatically significant volcanic eruptions, the net anthropogenic forcing dominates over natural radiative forcing mechanisms. Nevertheless, the substantial remaining uncertainty in the net anthropogenic ERF leads to large uncertainties in estimates of climate sensitivity from observations and in predicting future climate impacts. The uncertainty in the ERF arises principally from the incorporation of the rapid climate adjustments in the formulation, the well-recognized difficulties in characterizing the preindustrial state of the atmosphere, and the incomplete knowledge of the interactions of aerosols with clouds. This uncertainty impairs the quantitative evaluation of climate adaptation and mitigation pathways in the future. A grand challenge in Earth system science lies in continuing to sustain the relatively simple essence of the radiative forcing concept in a form similar to that originally devised, and at the same time improving the quantification of the forcing. This, in turn, demands an accurate, yet increasingly complex and comprehensive, accounting of the relevant processes in the climate system.
Abstract
A new frontier in weather forecasting is emerging as operational forecast models are now being run at convection-permitting resolutions at many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The Dynamical and Microphysical Evolution of Convective Storms (DYMECS) project is taking a fundamentally new approach to evaluate and improve such models: rather than relying on a limited number of cases, which may not be representative, the authors have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. They have related these structures to storm life cycles derived by tracking features in the rainfall from the U.K. radar network and compared them statistically to storm structures in the Met Office model, which they ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. The authors also evaluated the scale and intensity of convective updrafts using a new radar technique. They find that the horizontal size of simulated convective storms and the updrafts within them is much too large at 1.5-km resolution, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid lengths, as does the typical storm lifetime. The 200-m grid-length simulation with standard mixing length performs best over all diagnostics, although a greater mixing length improves the representation of deep convective storms.
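The link between overly wide storms and overly large mass fluxes follows from simple geometry; the back-of-envelope sketch below uses invented numbers (not DYMECS values) to show that a few-fold error in updraft width translates into close to an order-of-magnitude error in the mass flux of a single updraft.

```python
# Back-of-envelope scaling: the convective mass flux of one circular updraft is
# M = rho * w * A with A = pi * r^2, so M scales with the square of the width.
# All numbers below are illustrative assumptions, not DYMECS results.
import math

def updraft_mass_flux(rho, w, radius):
    """Mass flux (kg s^-1) of a circular updraft of given radius (m)."""
    return rho * w * math.pi * radius ** 2

rho, w = 1.0, 5.0                                   # kg m^-3, m s^-1 (assumed)
m_narrow = updraft_mass_flux(rho, w, radius=1.0e3)  # ~1-km-radius core (assumed)
m_wide   = updraft_mass_flux(rho, w, radius=3.0e3)  # ~3x too wide (assumed)
print(f"mass-flux ratio: {m_wide / m_narrow:.0f}x")  # -> 9x, near an order of magnitude
```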
Abstract
A set of high-resolution radar observations of convective storms has been collected to evaluate such storms in the Met Office Unified Model during the Dynamical and Microphysical Evolution of Convective Storms (DYMECS) project. The 3-GHz Chilbolton Advanced Meteorological Radar was set up with a scan-scheduling algorithm to automatically track convective storms identified in real time from the operational rainfall radar network. More than 1000 storm observations gathered over 15 days in 2011 and 2012 are used to evaluate the model under various synoptic conditions supporting convection. In terms of the detailed three-dimensional morphology, storms in the 1500-m grid-length simulations are shown to produce horizontal structures a factor of 1.5–2 wider than those in the radar observations. A set of nested model runs at grid lengths down to 100 m shows that the models converge in terms of storm width, but the storm structures in the simulations with the smallest grid lengths are too narrow and too intense compared to the radar observations. The modeled storms were surrounded by a region of drizzle without ice reflectivities above 0 dBZ aloft, which was related to the dominance of ice crystals and was improved by allowing only aggregates as an ice particle habit. Simulations with graupel outperformed the standard configuration for heavy-rain profiles, but the storm structures were a factor of 2 too wide and the convective cores 2 km too deep.
Abstract
This paper presents a conically scanning spaceborne Dopplerized 94-GHz radar Earth science mission concept: Wind Velocity Radar Nephoscope (WIVERN). WIVERN aims to provide global measurements of in-cloud winds using the Doppler-shifted radar returns from hydrometeors. The conically scanning radar could provide wind data with daily revisits poleward of 50°, 50-km horizontal resolution, and approximately 1-km vertical resolution. The measured winds, when assimilated into weather forecasts and provided they are representative of the larger-scale mean flow, should lead to further improvements in the accuracy and effectiveness of forecasts of severe weather and better focusing of activities to limit damage and loss of life. It should also be possible to characterize the more variable winds associated with local convection. Polarization diversity would be used to enable high wind speeds to be unambiguously observed; analysis indicates that artifacts associated with polarization diversity are rare and can be identified. Winds should be measurable down to 1 km above the ocean surface and 2 km over land. The potential impact of the WIVERN winds on reducing forecast errors is estimated by comparison with the known positive impact of cloud motion and aircraft winds. The main thrust of WIVERN is observing in-cloud winds, but WIVERN should also provide global estimates of ice water content, cloud cover, and vertical distribution, continuing the data series started by CloudSat with the conical scan giving increased coverage. As with CloudSat, estimates of rainfall and snowfall rates should be possible. These nonwind products may also have a positive impact when assimilated into weather forecasts.
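To give a sense of why polarization diversity matters at 94 GHz, the sketch below works through the standard Doppler ambiguity relations with assumed pulse timings (not mission specifications): the conventional Nyquist velocity at millimetre wavelengths is only a few meters per second, whereas closely spaced pulse pairs raise the unambiguous velocity well above typical in-cloud wind speeds.

```python
# Standard Doppler ambiguity relations for a 94-GHz radar. The PRF and the
# pulse-pair separation tau are assumed illustrative values, not WIVERN specs.
c = 3.0e8                         # speed of light, m s^-1
wavelength = c / 94.0e9           # ~3.2 mm at 94 GHz

prf = 4000.0                      # pulse repetition frequency, Hz (assumed)
v_nyquist = wavelength * prf / 4.0
print(f"conventional Nyquist velocity: {v_nyquist:.1f} m/s")       # ~3.2 m/s

tau = 20.0e-6                     # H/V pulse-pair separation, s (assumed)
v_unambiguous = wavelength / (4.0 * tau)
print(f"pulse-pair unambiguous velocity: {v_unambiguous:.0f} m/s")  # ~40 m/s
```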
Abstract
In recent years, a growing partnership has emerged between the Met Office and the designated U.S. national centers for expertise in severe weather research and forecasting, that is, the National Oceanic and Atmospheric Administration (NOAA) National Severe Storms Laboratory (NSSL) and the NOAA Storm Prediction Center (SPC). The driving force behind this partnership is a compelling set of mutual interests related to predicting and understanding high-impact weather and using high-resolution numerical weather prediction models as foundational tools to explore these interests.
The forum for this collaborative activity is the NOAA Hazardous Weather Testbed, where annual Spring Forecasting Experiments (SFEs) are conducted by NSSL and SPC. For the last decade, NSSL and SPC have used these experiments to find ways that high-resolution models can help achieve greater success in the prediction of tornadoes, large hail, and damaging winds. Beginning in 2012, the Met Office became a contributing partner in annual SFEs, bringing complementary expertise in the use of convection-allowing models, derived in their case from a parallel decadelong effort to use these models to advance prediction of flash floods associated with heavy thunderstorms.
The collaboration between NSSL, SPC, and the Met Office has been enthusiastic and productive, driven by strong mutual interests at a grassroots level and generous institutional support from the parent government agencies. In this article, a historical background is provided, motivations for collaborative activities are emphasized, and preliminary results are highlighted.