Search Results

You are looking at 1 - 10 of 12 items for Author or Editor: Barbara Casati
Barbara Casati, Abderrahmane Yagouti, and Diane Chaumont

Abstract

Public health planning needs the support of evidence-based information on current and future climate, which health professionals and decision makers can use to better understand and respond to the health impacts of extreme heat. Climate models provide information on the expected increase in temperatures and extreme heat events with climate change and can help predict the severity of future health impacts, which the public health sector can use to develop adaptation strategies that reduce heat-related morbidity and mortality. This study analyzes the evolution of extreme temperature indices specifically defined to characterize heat events associated with health risks in the context of a changing climate. The analysis is performed using temperature projections from the Canadian Regional Climate Model. A quantile-based statistical correction is applied to the projected temperatures to reduce model biases and account for the representativeness error. Moreover, generalized Pareto distributions are used to extend the upper tails of the temperature distributions and extrapolate the statistical correction to extremes that are not observed in the present but that might occur in the future. The largest increase in extreme daytime temperatures occurs in southern Manitoba, Canada, where the already overly dry climate and lack of soil moisture can lead to an uncontrolled enhancement of hot extremes. The occurrence of warm nights and heat waves, on the other hand, is already large and will increase substantially in the communities of the Great Lakes region, characterized by a humid climate. Impact and adaptation studies need to account for the temperature variability due to local effects, since it can be considerably larger than the model's natural variability.
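The quantile-based statistical correction described in this abstract can be illustrated with a minimal empirical quantile-mapping sketch. The function name and synthetic data are assumptions for illustration, and the generalized Pareto tail extension used in the study is deliberately omitted; this is not the study's implementation.

```python
import numpy as np

def quantile_map(model_hist, obs, model_fut, n_q=99):
    """Empirical quantile mapping: correct projected model values with the
    quantile-wise difference between observed and historical model climate.
    The study additionally fits generalized Pareto distributions to extend
    the upper tails beyond observed extremes; that step is omitted here."""
    q = np.linspace(0.01, 0.99, n_q)
    mq = np.quantile(model_hist, q)   # historical model quantiles
    oq = np.quantile(obs, q)          # observed quantiles
    # interpolate the quantile-wise correction at each projected value
    return model_fut + np.interp(model_fut, mq, oq - mq)
```

Values beyond the fitted quantile range are corrected with the nearest available quantile difference, which is precisely where a parametric tail extension such as the generalized Pareto fit becomes necessary for future extremes not observed in the present climate.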

Full access
Morten Køltzow, Barbara Casati, Eric Bazile, Thomas Haiden, and Teresa Valkonen

Abstract

Increased human activity in the Arctic calls for accurate and reliable weather predictions. This study presents an intercomparison of operational and/or high-resolution models in an attempt to establish a baseline for present-day Arctic short-range forecast capabilities for near-surface weather (pressure, wind speed, temperature, precipitation, and total cloud cover) during winter. One global model [the high-resolution version of the ECMWF Integrated Forecasting System (IFS-HRES)] and three high-resolution, limited-area models [Applications of Research to Operations at Mesoscale (AROME)-Arctic, Canadian Arctic Prediction System (CAPS), and AROME with Météo-France setup (MF-AROME)] are evaluated. As part of the model intercomparison, several aspects of the impact of observation errors and representativeness on the verification are discussed. The results show how the forecasts differ in their spatial details and how forecast accuracy varies with region, parameter, lead time, weather, and forecast system, and they confirm many findings from mid- or lower latitudes. While some weaknesses are unique to or more pronounced in some of the systems, several common model deficiencies are found, such as difficulties forecasting temperature during cloud-free, calm weather; a cold bias in windy conditions; problems distinguishing between freezing and melting conditions; underestimation of solid precipitation; less skillful wind speed forecasts over land than over ocean; and difficulties with small-scale spatial variability. The added value of high-resolution limited-area models is most pronounced for wind speed and temperature in regions with complex terrain and coastlines. However, forecast errors grow faster in the high-resolution models. This study also shows that observation errors and representativeness can account for a substantial part of the difference between forecast and observations in standard verification.
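Conditional verification of the kind used here, stratifying scores by the observed weather regime to expose errors such as a cold bias in windy conditions, can be sketched as follows. The function name, bin edges, and data are illustrative assumptions, not taken from the study.

```python
import numpy as np

def stratified_bias(fcst_t, obs_t, obs_wind,
                    bins=(0.0, 3.0, 7.0, 12.0, np.inf)):
    """Mean near-surface temperature forecast error (forecast minus
    observation) stratified by observed wind speed. The bin edges are
    illustrative placeholders."""
    idx = np.digitize(obs_wind, bins) - 1
    out = {}
    for i in range(len(bins) - 1):
        mask = idx == i
        if mask.any():
            label = f"[{bins[i]}, {bins[i + 1]}) m/s"
            out[label] = float((fcst_t[mask] - obs_t[mask]).mean())
    return out
```

Averaging errors within each regime rather than over all cases prevents a warm bias in calm conditions and a cold bias in windy conditions from cancelling into a misleadingly small overall bias.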

Open access
Morten Køltzow, Barbara Casati, Thomas Haiden, and Teresa Valkonen

Abstract

Assessing the quality of precipitation forecasts requires observations, but all precipitation observations have associated uncertainties, making it difficult to quantify the true forecast quality. One of the largest uncertainties is due to the wind-induced undercatch of solid precipitation in gauge measurements. This study discusses how this affects the verification of precipitation forecasts for Norway for one global model [the high-resolution version of the ECMWF Integrated Forecasting System (IFS-HRES)] and one high-resolution, limited-area model [the MetCoOp Ensemble Prediction System (MEPS)]. First, the forecasts are compared with high-quality reference measurements (less undercatch) and with the simpler measurement equipment commonly available (substantial undercatch) at the Haukeliseter observation site. Then the verification is extended to include all Norwegian observation sites: 1) stratifying by wind speed, since calm (windy) conditions experience less (more) undercatch; and 2) applying transfer functions, which convert measured precipitation to what would have been measured with high-quality equipment with less undercatch, before the forecast–observation comparison is performed. Results show that the wind-induced undercatch of solid precipitation has a substantial impact on verification results. Furthermore, applying transfer functions to adjust for wind-induced undercatch of solid precipitation gives a more realistic picture of true forecast capabilities. In particular, estimates of systematic forecast biases are improved, and to a lesser degree, verification scores like correlation, root-mean-square error (RMSE), equitable threat score (ETS), and stable equitable error in probability space (SEEPS). However, uncertainties associated with applying transfer functions are substantial and need to be taken into account in the verification process. Precipitation forecast verification for liquid and solid precipitation should be done separately whenever possible.
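The idea of a transfer function, adjusting gauge measurements for wind-induced undercatch of solid precipitation before the forecast–observation comparison, can be sketched as below. The functional form and the coefficients are illustrative placeholders only, not the published transfer functions applied in the study.

```python
import numpy as np

def adjust_undercatch(p_gauge, wind, temp, a=0.16, u_max=7.0):
    """Adjust measured precipitation for wind-induced undercatch of
    solid precipitation. Catch efficiency is modeled here as decaying
    exponentially with wind speed below freezing; the form and the
    coefficients a and u_max are illustrative placeholders, NOT the
    transfer functions used in the study."""
    u = np.minimum(wind, u_max)            # cap gauge-height wind speed
    catch_eff = np.where(temp < 0.0,       # solid precipitation only;
                         np.exp(-a * u),   # liquid assumed fully caught
                         1.0)
    return p_gauge / catch_eff             # estimated true precipitation
```

Dividing by a catch efficiency below one inflates the measured solid precipitation toward what a high-quality reference gauge would have recorded, which is why verification against adjusted observations reveals forecast biases that raw gauge data hide.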

Open access
Eric Gilleland, David Ahijevych, Barbara G. Brown, Barbara Casati, and Elizabeth E. Ebert

Abstract

Advancements in weather forecast models and their enhanced resolution have led to substantially improved and more realistic-appearing forecasts for some variables. However, traditional verification scores often indicate poor performance because of the increased small-scale variability so that the true quality of the forecasts is not always characterized well. As a result, numerous new methods for verifying these forecasts have been proposed. These new methods can mostly be classified into two overall categories: filtering methods and displacement methods. The filtering methods can be further delineated into neighborhood and scale separation, and the displacement methods can be divided into features based and field deformation. Each method gives considerably more information than the traditional scores, but it is not clear which method(s) should be used for which purpose.

A verification methods intercomparison project has been established in order to glean a better understanding of the proposed methods in terms of their various characteristics and to determine what verification questions each method addresses. The study is ongoing, and preliminary qualitative results for the different approaches applied to different situations are described here. In particular, the various methods and their basic characteristics, similarities, and differences are described. In addition, several questions are addressed regarding the application of the methods and the information that they provide. These questions include (i) how the method(s) inform performance at different scales; (ii) how the methods provide information on location errors; (iii) whether the methods provide information on intensity errors and distributions; (iv) whether the methods provide information on structure errors; (v) whether the approaches have the ability to provide information about hits, misses, and false alarms; (vi) whether the methods do anything that is counterintuitive; (vii) whether the methods have selectable parameters and how sensitive the results are to parameter selection; (viii) whether the results can be easily aggregated across multiple cases; (ix) whether the methods can identify timing errors; and (x) whether confidence intervals and hypothesis tests can be readily computed.
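As a concrete example of the neighborhood (filtering) category discussed above, the fractions skill score compares event fractions within square neighborhoods rather than requiring point-by-point matches. This numpy-only sketch is illustrative and not drawn from the intercomparison project's code.

```python
import numpy as np

def window_fraction(binary, n):
    """Event fraction in every n x n window, via 2-D cumulative sums
    (valid windows only, no padding beyond the domain edges)."""
    c = np.cumsum(np.cumsum(binary.astype(float), axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[n:, n:] - c[:-n, n:] - c[n:, :-n] + c[:-n, :-n]
    return s / (n * n)

def fss(fcst, obs, threshold, n):
    """Fractions skill score: 1 is perfect; for a merely displaced
    feature the score rises as the neighborhood size n grows, which is
    how the method informs about performance at different scales."""
    ff = window_fraction(fcst >= threshold, n)
    of = window_fraction(obs >= threshold, n)
    ref = np.mean(ff ** 2) + np.mean(of ** 2)
    return 1.0 - np.mean((ff - of) ** 2) / ref if ref > 0 else np.nan
```

Evaluating the score over a range of neighborhood sizes shows the scale at which a displaced but otherwise correct forecast becomes skillful, exactly the kind of scale-dependent information the traditional point-matching scores cannot provide.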

Full access
Manfred Dorninger, Eric Gilleland, Barbara Casati, Marion P. Mittermaier, Elizabeth E. Ebert, Barbara G. Brown, and Laurence J. Wilson

Abstract

Recent advancements in numerical weather prediction (NWP) and the enhancement of model resolution have created the need for more robust and informative verification methods. In response to these needs, a plethora of spatial verification approaches have been developed in the past two decades. A spatial verification method intercomparison was established in 2007 with the aim of gaining a better understanding of the abilities of the new spatial verification methods to diagnose different types of forecast errors. The project focused on prescribed errors for quantitative precipitation forecasts over the central United States. The intercomparison led to a classification of spatial verification methods and a cataloging of their diagnostic capabilities, providing useful guidance to end users, model developers, and verification scientists. A decade later, NWP systems have continued to increase in resolution, including advances in high-resolution ensembles. This article describes the setup of a second phase of the verification intercomparison, called the Mesoscale Verification Intercomparison over Complex Terrain (MesoVICT). MesoVICT focuses on the application, capability, and enhancement of spatial verification methods to deterministic and ensemble forecasts of precipitation, wind, and temperature over complex terrain. Importantly, this phase also explores the issue of analysis uncertainty through the use of an ensemble of meteorological analyses.

Open access
Eric Gilleland, Gregor Skok, Barbara G. Brown, Barbara Casati, Manfred Dorninger, Marion P. Mittermaier, Nigel Roberts, and Laurence J. Wilson

Abstract

As part of the second phase of the spatial forecast verification intercomparison project (ICP), dubbed the Mesoscale Verification Intercomparison in Complex Terrain (MesoVICT) project, a new set of idealized test fields is prepared. This paper describes these new fields and their rationale and uses them to analyze a number of summary measures associated with distance and geometric-based approaches. The results provide guidance about how they inform about performance under various scenarios. The new case comparisons are grouped into four categories: (i) pathological situations such as when a variable is zero valued at all grid points; (ii) circular events aimed at evaluating how different methods handle contrived situations, such as equal but opposite translations, the presence of multiple events of same/different size, boundary effects, and the influence of the positioning of events in the domain; (iii) elliptical events representing simplified scenarios that mimic commonly encountered weather phenomena in complex terrain; and (iv) cases aimed at analyzing how the verification methods handle small-scale scattered events, very large events with holes (e.g., a small portion of clear sky on a cloudy overcast day), and the presence of noise in one or both fields. Results show that all analyzed measures perform poorly in the pathological setting. They are either not able to provide a result at all or they instigate a special rule to prescribe a value resulting in erratic results. The analysis also showed that methods provide similar information in many situations, but that each has its positive properties along with certain unique limitations.
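A minimal example of a geometric summary measure applied to such idealized binary fields is the centroid distance, which also exhibits the pathological behavior noted above when a field is zero valued everywhere. This is an illustrative sketch, not the project's code.

```python
import numpy as np

def centroid_distance(field1, field2):
    """Euclidean distance between the centroids of two binary event
    fields. Undefined (NaN) in the pathological all-zero setting, where
    distance and geometric measures cannot provide a result."""
    def centroid(f):
        idx = np.argwhere(f > 0)
        return idx.mean(axis=0) if idx.size else None
    c1, c2 = centroid(field1), centroid(field2)
    if c1 is None or c2 is None:
        return np.nan   # no events in one field: measure is undefined
    return float(np.linalg.norm(c1 - c2))
```

Returning NaN rather than prescribing a special value is one of the design choices the intercomparison scrutinizes: rules that force a number in degenerate cases are a source of the erratic results reported for the pathological category.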

Open access
Barbara Casati, Manfred Dorninger, Caio A. S. Coelho, Elizabeth E. Ebert, Chiara Marsigli, Marion P. Mittermaier, and Eric Gilleland

Abstract

The International Verification Methods Workshop was held online in November 2020 and included sessions on physical error characterization using process diagnostics and error tracking techniques; exploitation of data assimilation techniques in verification practices, e.g., to address representativeness issues and observation uncertainty; spatial verification methods and the Model Evaluation Tools, as unified reference verification software; and meta-verification and best practices for scores computation. The workshop reached out to diverse research communities working in the areas of high-impact weather, subseasonal to seasonal prediction, polar prediction, and sea ice and ocean prediction. This article summarizes the major outcomes of the workshop and outlines future strategic directions for verification research.

Full access
Paul Joe, Stella Melo, William R. Burrows, Barbara Casati, Robert W. Crawford, Armin Deghan, Gabrielle Gascon, Zen Mariani, Jason Milbrandt, and Kevin Strawbridge
Full access
Paul Joe, Stella Melo, William R. Burrows, Barbara Casati, Robert W. Crawford, Armin Deghan, Gabrielle Gascon, Zen Mariani, Jason Milbrandt, and Kevin Strawbridge

Abstract

The goal of the Canadian Arctic Weather Science (CAWS) project is to conduct research in support of the future operational monitoring and forecasting programs of Environment and Climate Change Canada in the Arctic, where increased economic and recreational activity is expected, along with enhanced transportation and search and rescue requirements. Because of cost, remoteness, and vast geographical coverage, the future monitoring concept includes a combination of space-based observations, sparse in situ surface measurements, and advanced reference sites. A prototype reference site has been established at Iqaluit, Nunavut (63°45'N, 68°33'W), that includes a Ka-band radar, water vapor lidars (both in-house and commercial versions), multiple Doppler lidars, ceilometers, radiation flux sensors, and precipitation sensors. The scope of the project includes understanding polar processes, evaluating new technologies, validating satellite products, validating numerical weather prediction systems, developing warning products, and communicating their risk to a variety of users. This contribution provides an overview of the CAWS project, presents some preliminary results, and aims to encourage collaboration.

Free access
Helge F. Goessling, Thomas Jung, Stefanie Klebe, Jenny Baeseman, Peter Bauer, Peter Chen, Matthieu Chevallier, Randall Dole, Neil Gordon, Paolo Ruti, Alice Bradley, David H. Bromwich, Barbara Casati, Dmitry Chechin, Jonathan J. Day, François Massonnet, Brian Mills, Ian Renfrew, Gregory Smith, and Renee Tatusko
Full access