Performance in the prediction of hurricane intensity and associated hazards has been evaluated for a newly developed convection-permitting forecast system that uses ensemble data assimilation techniques to ingest high-resolution airborne radar observations from the inner core. This system performed well for three of the ten costliest Atlantic hurricanes: Ike (2008), Irene (2011), and Sandy (2012). Four to five days before these storms made landfall, the system produced good deterministic and probabilistic forecasts of not only track and intensity, but also of the spatial distributions of surface wind and rainfall. Averaged over all 102 applicable cases that have inner-core airborne Doppler radar observations during 2008–2012, the system reduced the day-2-to-day-4 intensity forecast errors by 25%–28% compared to the corresponding National Hurricane Center’s official forecasts (which have seen little or no decrease in intensity forecast errors over the past two decades). Empowered by sufficient computing resources, advances in both deterministic and probabilistic hurricane prediction will enable emergency management officials, the private sector, and the general public to make more informed decisions that minimize the losses of life and property.
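For readers unfamiliar with how such a relative improvement is usually expressed, the minimal sketch below (in Python, with purely hypothetical error values rather than the verification statistics from this study) shows how a percentage reduction in mean absolute intensity error relative to a baseline forecast can be computed.

```python
import numpy as np

# Hypothetical illustration (not the verification data from the study):
# absolute intensity errors (kt) for the same cases at a fixed lead time,
# one value per verified forecast, for the experimental system and for
# the corresponding official forecast.
err_system   = np.array([12.0,  9.5, 15.0,  8.0, 11.5])   # placeholder values
err_official = np.array([16.0, 14.0, 18.5, 12.0, 15.5])   # placeholder values

mae_system   = err_system.mean()
mae_official = err_official.mean()

# Relative reduction of mean absolute intensity error, in percent.
reduction = 100.0 * (mae_official - mae_system) / mae_official
print(f"Mean error reduction: {reduction:.1f}%")
```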
While atmospheric reanalysis datasets are widely used in climate science, many technical issues hinder comparing them to each other and to observations. The reanalysis fields are stored in diverse file architectures, data formats, and resolutions. Their metadata, such as variable names and units, can also differ. Individual users have to download the fields, convert them to a common format, store them locally, change variable names, regrid if needed, and convert units. Even if a dataset can be read via the Open-Source Project for a Network Data Access Protocol (commonly known as OPeNDAP) or a similar protocol, most of this work is still needed. All of these tasks take time, effort, and money. Our group at the Cooperative Institute for Research in the Environmental Sciences at the University of Colorado and affiliated colleagues at NOAA's Earth System Research Laboratory Physical Sciences Division have expertise both in making reanalysis datasets available and in creating web-based climate analysis tools that have been widely used throughout the meteorological community. To overcome some of the obstacles in reanalysis intercomparison, we have created a set of web-based Reanalysis Intercomparison Tools (WRIT) at www.esrl.noaa.gov/psd/data/writ/. WRIT allows users to easily plot and compare reanalysis datasets, and to test hypotheses. For standard pressure-level and surface variables there are tools to plot trajectories, monthly mean maps and vertical cross sections, and monthly mean time series. Some observational datasets are also included. Users can refine dates, statistics, and plotting options. WRIT also facilitates the mission of the Reanalyses.org website as a convenient toolkit for studying reanalysis datasets.
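As an illustration of the per-user effort that WRIT is intended to eliminate, the sketch below shows one plausible harmonization workflow using the open-source xarray library; the OPeNDAP URL, source variable name, and target grid are hypothetical and do not refer to any specific reanalysis archive.

```python
# Minimal sketch of the manual harmonization WRIT is designed to avoid.
# The OPeNDAP URL, variable names, and units below are hypothetical.
import numpy as np
import xarray as xr

url = "https://example.gov/opendap/reanalysis_a/monthly/air_temperature.nc"  # placeholder
ds = xr.open_dataset(url)                     # read remotely via OPeNDAP

# Harmonize metadata: rename the variable and convert units (K -> degC).
ds = ds.rename({"TMP_2m": "t2m"})             # hypothetical source name
ds["t2m"] = ds["t2m"] - 273.15
ds["t2m"].attrs["units"] = "degC"

# Regrid to a common 2.5-degree grid by linear interpolation so that two
# reanalyses can be differenced point by point.
target = xr.Dataset(
    coords={"lat": np.arange(-90, 90.1, 2.5), "lon": np.arange(0, 360, 2.5)}
)
ds_common = ds.interp(lat=target.lat, lon=target.lon)
```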
Cumulus clouds, which are among the largest sources of uncertainty in climate change science and tropical circulation, have to date resisted the numerous attempts made during the last six decades to unravel their cloud-scale dynamics. One major reason has been the lack of a convincing fluid-dynamical model and the difficulty of making repeatable measurements in an inherently transient flow. This article summarizes recent work showing that cumulus-type flows can be generated in the laboratory by releasing volumetric heat into a plume above a height analogous to the cloud condensation level and in quantities dynamically similar to the release of latent heat in the natural cloud. Such a “transient diabatic plume” (TDP) seems to mimic cumulus clouds with adiabatic/pseudoadiabatic processes of latent heat release. With appropriate heating profile histories, the TDP simulates a variety of cumulus-cloud forms, from cumulus congestus to cumulus fractus, and permits tracking their evolution through a complete life cycle. Selected examples of such laboratory simulations are supported by preliminary results from direct numerical simulations based on the Navier-Stokes-Boussinesq equations. These simulations suggest that the baroclinic torque plays an important role in the dynamics of both large- and small-scale motions in cloud-type flows.
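For reference, a standard Boussinesq formulation with an imposed volumetric heating term, written here as a generic sketch rather than the exact system solved in the cited simulations, makes the role of the baroclinic torque explicit: taking the curl of the momentum equation yields a vorticity equation in which horizontal buoyancy gradients act as a source of vorticity.

```latex
% Generic Navier-Stokes-Boussinesq system with a volumetric heating term Q
% (a textbook form, not necessarily the exact equations used in the DNS).
\begin{aligned}
\frac{D\mathbf{u}}{Dt} &= -\frac{1}{\rho_0}\nabla p' + b\,\hat{\mathbf{z}} + \nu\nabla^{2}\mathbf{u},
\qquad \nabla\cdot\mathbf{u}=0,\\
\frac{Db}{Dt} &= \kappa\nabla^{2} b + Q(\mathbf{x},t),
\qquad b \equiv g\,\frac{T'}{T_0},\\
\frac{D\boldsymbol{\omega}}{Dt} &= (\boldsymbol{\omega}\cdot\nabla)\mathbf{u}
+ \underbrace{\nabla b \times \hat{\mathbf{z}}}_{\text{baroclinic torque}}
+ \nu\nabla^{2}\boldsymbol{\omega}.
\end{aligned}
```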
The objective of the Observations for Model Intercomparison Projects (Obs4MIPs) is to provide the climate science community with observational data that are analogous (in terms of variables, temporal and spatial frequency, and periods) to output from the fifth phase of the World Climate Research Programme's (WCRP) Coupled Model Intercomparison Project (CMIP5) climate model simulations. The essential aspect of the Obs4MIPs methodology is that it strictly follows the CMIP5 protocol document when selecting the observational datasets. Obs4MIPs also provides documentation that describes aspects of the observational data (e.g., data origin, instrument overview, uncertainty estimates) that are of particular relevance to scientists involved in climate model evaluation and analysis. In this paper, we focus on the activities related to the initial set of satellite observations, which are being carried out in close coordination with CMIP5 and directly engage NASA's observational (e.g., mission and instrument) science teams. Having launched Obs4MIPs with these datasets, we also briefly discuss a broader effort that strives to engage other agencies and experts who maintain datasets, including reanalyses, that can be directly used to evaluate climate models. Different strategies for using satellite observations to evaluate climate models are also briefly summarized.
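To make concrete what "analogous in variables, frequency, and period" buys an analyst, the sketch below compares a model field with an observational field when both follow the common variable conventions (here the standard monthly-mean top-of-atmosphere outgoing longwave radiation variable, rlut); the file names are hypothetical placeholders, and the xarray library is assumed.

```python
# Sketch of a model-vs-observation comparison enabled by common formatting.
# File names are hypothetical; 'rlut' (TOA outgoing longwave radiation) is a
# standard CMIP5/Obs4MIPs variable name.
import xarray as xr

model = xr.open_dataset("rlut_Amon_SomeModel_historical_r1i1p1_200001-200512.nc")
obs   = xr.open_dataset("rlut_CERES-EBAF_obs4MIPs_200001-200512.nc")

# Because both files use the same variable name, units, and monthly frequency,
# a climatological comparison reduces to a few lines.
model_clim = model["rlut"].groupby("time.month").mean("time")
obs_clim   = obs["rlut"].groupby("time.month").mean("time")

# Interpolate the model climatology onto the observational grid and difference.
bias = model_clim.interp(lat=obs_clim.lat, lon=obs_clim.lon) - obs_clim
print(float(bias.mean()))  # mean bias (unweighted, for illustration only)
```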
The STAMMEX (Spatial and Temporal Scales and Mechanisms of Extreme Precipitation Events over Central Europe) project has developed a high-resolution gridded long-term precipitation dataset based on the daily precipitation observing network of the German Weather Service (DWD), which runs one of the world's densest rain gauge networks, comprising more than 7,500 stations. Several quality-controlled daily gridded products with homogenized sampling were developed, covering the periods from 1931 onward (at 0.5° resolution), from 1951 onward (0.5° and 0.25°), and 1971–2000 (0.5°, 0.25°, and 0.1°). Different methods were tested to select the gridding methodology that minimizes errors of integral grid estimates over hilly terrain. Besides daily precipitation values with uncertainty estimates, the STAMMEX datasets include a variety of statistics that characterize the temporal and spatial dynamics of the precipitation distribution (quantiles, extremes, wet/dry spells, etc.). Comparisons with existing continental-scale daily precipitation grids (e.g., CRU, ECA E-OBS, GCOS), which include considerably fewer observations than STAMMEX, demonstrate the added value of high-resolution grids for extreme rainfall analyses. These data exhibit spatial variability patterns and trends in precipitation extremes that are missed or incorrectly reproduced over Central Europe by coarser-resolution grids based on sparser networks. The STAMMEX dataset can be used for high-quality climate diagnostics of precipitation variability, as a reference for reanalyses and remotely sensed precipitation products (including the upcoming Global Precipitation Measurement mission products), and as input to regional climate and operational weather forecast models.
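As an example of how one of the listed spell statistics can be derived from a daily series at a single grid point, the sketch below computes the longest wet-spell length and a high quantile; the 1 mm wet-day threshold and the sample values are illustrative and are not the STAMMEX processing code.

```python
import numpy as np

def max_wet_spell(daily_precip_mm, wet_threshold=1.0):
    """Length (in days) of the longest run of consecutive wet days,
    where a wet day has precipitation >= wet_threshold (mm).
    Illustrative helper, not the STAMMEX processing code."""
    longest = current = 0
    for value in daily_precip_mm:
        if value >= wet_threshold:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Hypothetical daily series for one grid cell (mm/day).
series = np.array([0.0, 2.5, 4.1, 0.2, 7.0, 8.3, 12.6, 0.0, 1.4])
print(max_wet_spell(series))        # -> 3 (longest run of wet days)
print(np.quantile(series, 0.95))    # e.g. the 95th percentile of daily totals
```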
Data collected by radiosondes are of vital importance to a wide variety of studies, among them those aimed at understanding the interaction between the land surface and the atmosphere. However, atmospheric measurements in developing countries, some of which encompass areas critical to the regulation of global climate, are sparse because of the lack of funding allocated toward collecting such data, and therefore fail to meet the standards set by the World Meteorological Organization. We review current radiosonde technologies and an alternative that aims at lowering sounding costs by recovering the sondes: the glidersonde. Two major issues currently hamper further development and commercialization of this technology: 1) how to make radiosondes reusable while keeping the market viable for sonde manufacturers, and 2) the need for consistent and effective governmental aviation regulations for developing and flying glidersondes. We conclude this review with an alternative consideration as an incentive for cooperation in the development and implementation of cost-effective sounding equipment.